THE CATHOLIC UNIVERSITY OF MALAWI
BSME 3102: EVALUATION OF DEVELOPMENT POLICIES, PROGRAMS AND PROJECTS
KASSAN KASELEMA
kkaselema@cunima.ac.mw
DETERMINING POINTS OF IMPACT EVALUATION
Impact refers to the positive and negative, intended and unintended,
direct and indirect, primary and secondary effects produced by an
intervention.
Impact evaluations measure the change in a development outcome that
is attributable to a defined intervention.
Impact evaluation is a systematic and empirical assessment of the effects
brought about by an intervention.
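To make the idea of attributable change concrete, the minimal sketch below compares mean outcomes for households reached by an intervention with a comparison group standing in for the counterfactual. All numbers are hypothetical, and a credible comparison group is assumed; constructing one is precisely what impact evaluation designs are about.

```python
# Minimal sketch (hypothetical data): impact estimated as the difference in mean
# outcomes between participants and a comparison group (the assumed counterfactual).

def mean(values):
    return sum(values) / len(values)

# Hypothetical out-of-pocket health spending (currency units per year)
treated_group = [112, 98, 105, 120, 101]      # households covered by the intervention
comparison_group = [140, 135, 150, 128, 142]  # similar households without it

impact_estimate = mean(treated_group) - mean(comparison_group)
print(f"Estimated impact: {impact_estimate:.1f} per household per year")
```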
ASPECTS OF IMPACT EVALUATION
FORMATIVE EVALUATION
Used in the early stages of the development of a program or initiative.
This type of evaluation assesses the progress and effectiveness of a program while it is still being
implemented.
Formative evaluations are often used to make early improvements or adjustments to improve the
program, evaluate the quality of the program design and implementation strategies, and ensure
that the program is aligned with its intended goals.
A community needs assessment done ahead of, or in the early stages of, a program or project is one
example of a formative evaluation.
Implementing a formative evaluation can help practitioners ensure programming begins with a
solid foundation. By identifying potential issues or areas for improvement early in the process,
formative evaluations help practitioners and policymakers make necessary adjustments before an
initiative is fully implemented. In this way, formative evaluations help to ensure that initiatives
start on the right track and have a greater chance of success.
SUMMATIVE EVALUATION
Often used to determine whether the program achieved its intended goals and objectives
by assessing the overall effectiveness of a program after it has been completed.
Summative evaluations collect information from multiple sources over time to
provide evidence of the program's effectiveness. They are often best used for
multi-year programs, where there is sufficient time for practitioners
to make adjustments learned from formative evaluations. Summative evaluations can be
helpful when deciding whether to continue, end, or expand a program.
While summative evaluations are done after a project is completed, planning for the
evaluation needs to start before the project begins: you need a plan for what you are
going to evaluate and the impact you want to see ahead of time, so that you know
whether you have achieved your goals.
RATIONALE OF IMPACT EVALUATION
To decide whether to fund an intervention: an ex ante evaluation is conducted before
an intervention is implemented to estimate its likely impacts and inform funding
decisions.
To decide whether or not to continue or expand an intervention.
To learn how to replicate or scale up a pilot.
To learn how to successfully adapt a successful intervention to suit another context.
To show accountability:
- to reassure funders, including donors and taxpayers, that money is being wisely
invested;
- to inform intended beneficiaries and communities about whether or not, and in what
ways, a program is benefiting the community.
CHOOSING METHODS FOR IMPACT EVALUATION
CONSIDERATIONS FOR QUALITY IMPACT EVALUATIONS
Utility – results of the evaluation must be useful to those who require the
information.
Accuracy – ensuring that findings are reported fairly, comprehensively and
clearly.
Propriety – ethical issues of confidentiality and anonymity, as well as potential
harmful effects of being involved in the evaluation must be adequately addressed.
Practicality – taking into account the available resources (time, money, expertise
and existing data) and when the results are needed to inform decisions.
Accountability – refers to presenting clear evidence and criteria on which
conclusions have been drawn, and acknowledging their limitations. Transparency
about data sources is important.
TYPES OF EVALUATION QUESTIONS
Any evaluation begins with the formulation of a study question that focuses the research
and that is tailored to the policy interest at hand. The evaluation then consists of
generating credible evidence to answer that question.
The basic impact evaluation question can be formulated as:
What is the impact or causal effect of the program on an outcome of interest?
For example: What is the effect of the Health Insurance Subsidy Program on
households' out-of-pocket health expenditures?
The question can also be oriented toward testing options, such as: Which combination of
mail campaigns and family counseling works best to encourage exclusive breastfeeding? A
clear evaluation question is the starting point of any effective evaluation.
EVALUATION QUESTIONS
Formative evaluation
Why it is used: To make early improvements, evaluate quality, and ensure that the program is aligned with its intended goals
When: At the beginning
Evaluation questions: Is the program reaching its intended participants? How are inputs contributing to program functioning?
Methods: Survey, focus group, interview, document review

Summative evaluation
Why it is used: To demonstrate the effectiveness of a program
When: At the end
Evaluation questions: Did participants experience the desired outcomes? What changes were made to improve the quality of the program?
Methods: Survey, focus group, interview, document review, case study
THEORY OF CHANGE
A theory of change is a description of how an intervention is supposed to deliver the desired
results. It describes the causal logic of how and why a particular project, program, or policy
will reach its intended outcomes.
• Is at policy level
• Is a pathway towards reaching a desired end
• Sets out the preconditions for reaching desired results
• Is principle-based: gives guidance for the development and implementation of programs
• Does not contain a 'sequential process of change'
• A ToC is at its best when it combines logical thinking, critical reflection and dialogue
• Is both a process and a product, with subjective limits
• A ToC in programmes inspires innovation, supports improvement and adaptive management
• Is best kept flexible, not prescribed or mandatory
• Requires managers and funders to get more comfortable with emergence and flexibility
RESULTS CHAIN
A results chain is a diagram that illustrates a project team’s theory of
change using a series of boxes and arrows.
Due to the causal, if-then sequence of a results chain, it also shows the
chronological and temporal nature of expected results.
Results chains are a visual tool for showing what a project is doing and
why.
They explain all the links in the chain from project actions to market actor
changes, through to impacts on target groups, in detail, for a particular
intervention.
They can be used to monitor change and adapt strategy on an ongoing
basis.
KEY PRINCIPLES OF A RESULTS CHAIN
Results chains need to reflect the principle of Facilitation, which can be
difficult when trying to predict a series of changes and to attribute the cause of
those changes to the project.
What is important is to remember that ultimately the market system exists and
operates regardless of our presence.
The best we can hope for as facilitators is to influence it in a positive direction.
Similarly, for a results chain to align with Systems Thinking, it’s important to
thoroughly document assumptions and keep in mind other influences that can
lead to unexpected responses or changes.
Results chains can be designed to reflect Gender, allowing a project to
disaggregate change as it affects men and women.
ELEMENTS OF A RESULT CHAIN
Results chains show causal “if…then” relationships between factors. For example, if
we implement a strategy, then we expect to achieve the first intermediate result. If we
achieve the first intermediate result, then we expect to achieve the second
intermediate result and so on and so forth until we reach a threat reduction result. If
we successfully reduce a threat, we expect to maintain or improve the target.
BENEFITS OF A RESULTS CHAIN
Document assumptions and be explicit.
Document existing evidence and uncertainty.
Define how actions achieve results.
Define realistic timelines.
Identify interim results.
Develop objectives.
Facilitate targeted monitoring and evaluation.
A results chain sets out a logical, plausible outline of how a sequence of inputs, activities, and
outputs for which a project is directly responsible interacts with behavior to establish pathways
through which impacts are achieved.
It establishes the causal logic from the initiation of the project, beginning with resources
available, to the end, looking at long-term goals. A basic results chain will map the following
elements:
Inputs: Resources at the disposal of the project, including staff and budget
Activities: Actions taken or work performed to convert inputs into outputs
Outputs: The tangible goods and services that the project activities produce (They are directly
under the control of the implementing agency.)
Outcomes: Results likely to be achieved once the beneficiary population uses the project
outputs (They are usually achieved in the short-to-medium term.)
Final outcomes: The final project goals (They can be influenced by multiple factors and are
typically achieved over a longer period of time.)
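The element list above can also be written down as a simple data structure so the if-then logic stays explicit. The sketch below uses a hypothetical teacher-training project; every entry is illustrative, not part of this lesson's material.

```python
# Minimal sketch: a results chain for a hypothetical teacher-training project,
# mapped as a dictionary keyed by the elements listed above.
results_chain = {
    "inputs": ["project staff", "budget", "new mathematics curriculum"],
    "activities": ["train teachers", "print and distribute textbooks"],
    "outputs": ["teachers trained", "textbooks delivered to schools"],
    "outcomes": ["teachers use the new curriculum in class",
                 "students follow the new curriculum"],
    "final outcomes": ["improved mathematics test scores",
                       "higher completion rates"],
}

# The causal logic reads along the chain: if one link holds, the next is expected.
stages = list(results_chain)
for earlier, later in zip(stages, stages[1:]):
    print(f"If the {earlier} are in place, then the {later} are expected to follow.")
```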
PARTS OF A RESULT CHAIN
1) IMPLEMENTATION
2) RESULTS
3) ASSUMPTIONS AND RISKS
Implementation:
Planned work delivered by the project, including inputs, activities, and outputs.
These are the areas that the implementation agency can directly monitor to
measure the project’s performance.
Results:
Intended results consist of the outcomes and final outcomes, which are not
under the direct control of the project and are contingent on behavioral
changes by program beneficiaries. In other words, they depend on the
interactions between the supply side (implementation) and the demand side
(beneficiaries). These are the areas subject to impact evaluation to measure
effectiveness.
Assumptions and risks:
They include any evidence from the literature on the proposed causal logic and
the assumptions on which it relies, references to similar programs’
performance, and a mention of risks that may affect the realization of intended
results and any mitigation strategy put in place to manage those risks.
EVALUATION HYPOTHESIS
Once you have outlined the results chain, you can formulate the hypotheses that you would
like to test using the impact evaluation.
In a high school mathematics example (a new curriculum supported by teacher training and textbook distribution), the hypotheses to be tested could be the following:
• The new curriculum is superior to the old one in imparting knowledge of
mathematics.
• Trained teachers use the new curriculum in a more effective way than other
teachers.
• If we train the teachers and distribute the textbooks, then the teachers will use the
new textbooks and curriculum in class, and the students will follow the curriculum.
• If we train the teachers and distribute the textbooks, then the math test results
will improve by 5 points on average.
• Performance in high school mathematics influences completion rates and labor
market performance.
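The fourth hypothesis above is quantitative (an average improvement of 5 points), so it can be checked once outcome data are available. The sketch below uses invented test scores and a simple difference in means with a rough confidence interval; it is illustrative only and assumes treated and comparison schools are otherwise comparable.

```python
# Minimal sketch (hypothetical scores): is the difference in mean math scores
# between treated and comparison schools consistent with the 5-point hypothesis?
from math import sqrt
from statistics import mean, stdev

treated = [68, 72, 75, 70, 74, 71, 69, 73]     # schools with trained teachers and textbooks
comparison = [64, 66, 65, 67, 63, 68, 66, 65]  # schools without the intervention

difference = mean(treated) - mean(comparison)

# Approximate standard error of the difference in means (independent samples assumed)
se = sqrt(stdev(treated) ** 2 / len(treated) + stdev(comparison) ** 2 / len(comparison))

print(f"Estimated effect: {difference:.1f} points (hypothesized: 5.0)")
print(f"Approximate 95% interval: {difference - 1.96 * se:.1f} to {difference + 1.96 * se:.1f}")
```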
SELECTING PERFORMANCE INDICATORS
A clearly articulated results chain provides a useful map for selecting the
indicators that will be measured along the chain. They will include indicators used
both to monitor program implementation and to evaluate results. Again, it is useful to
engage program stakeholders in selecting these indicators,
to ensure that the ones selected are good measures of program performance. The
acronym SMART is a widely used and useful rule of thumb to ensure that indicators
used are:
• Specific: to measure the information required as closely as possible
• Measurable: to ensure that the information can be readily obtained
• Attributable: to ensure that each measure is linked to the project's efforts
• Realistic: to ensure that the data can be obtained in a timely fashion, with
reasonable frequency, and at reasonable cost
• Targeted: to the objective population.
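One way to keep the SMART checklist in view during indicator selection is to record each candidate indicator with an explicit flag per criterion. The entries in the sketch below are hypothetical examples, not indicators prescribed by this lesson.

```python
# Minimal sketch: candidate indicators recorded against the SMART checklist.
smart_criteria = ["specific", "measurable", "attributable", "realistic", "targeted"]

indicators = [
    {"name": "share of trained teachers using the new curriculum in class",
     "level": "outcome",
     "smart": {"specific": True, "measurable": True, "attributable": True,
               "realistic": True, "targeted": True}},
    {"name": "general improvement in education quality",
     "level": "final outcome",
     "smart": {"specific": False, "measurable": False, "attributable": False,
               "realistic": True, "targeted": False}},
]

for ind in indicators:
    unmet = [c for c in smart_criteria if not ind["smart"][c]]
    verdict = "keep" if not unmet else "revise (not " + ", ".join(unmet) + ")"
    print(f"{ind['level']}: {ind['name']} -> {verdict}")
```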
When choosing indicators, remember that it is important to identify indicators all
along the results chain, and not just at the level of outcomes, so that you will be able to
track the causal logic of any program outcomes that are observed.
Even when you implement an impact evaluation, it is still important to track
implementation indicators, so you can determine whether interventions have been
carried out as planned, whether they have reached their intended beneficiaries, and
whether they arrived on time.
Without these indicators all along the results chain, the impact evaluation will
produce only a “black box” that identifies whether or not the predicted results
materialized; it will not be able to explain why that was the case.
Apart from selecting the indicators, it is also useful to consider the arrangements for
producing the data.
IMPACT EVALUATION INDICATORS
Impact evaluations use the standard OECD-DAC criteria (OECD-DAC, accessed 2015):
• Relevance: The extent to which the objectives of an intervention are consistent with
recipients' requirements, country needs, global priorities and partners' policies.
• Effectiveness: The extent to which the intervention's objectives were achieved, or are
expected to be achieved, taking into account their relative importance.
• Efficiency: A measure of how economically resources/inputs (funds, expertise, time,
equipment, etc.) are converted into results.
• Impact: Positive and negative, primary and secondary long-term effects produced by the
intervention, whether directly or indirectly, intended or unintended.
• Sustainability: The continuation of benefits from the intervention after major development
assistance has ceased. Interventions must be both environmentally and financially sustainable.
TYPES OF IMPACT INDICATORS
Situational (impact) indicators
Outcome indicators
Output indicators
1. SITUATIONAL INDICATORS
Describe the national development situation.
They relate to the Millennium Development Goals and the SRF Goals and Sub-goals,
and reflect long-term development results, or impact.
Situational indicators provide a broad picture of country development status
(macro baseline).
They are most useful to the country office senior management, informing the
level at which senior management interacts with partners and develops
strategies.
Specific examples of situational indicators include the signature UNDP-initiated
development indicators, such as the human development index (HDI) and the
human poverty index (HPI), as well as others developed by the OECD and adopted
by the United Nations system.
2. OUTCOME INDICATORS
Help the organization and country offices think strategically about the key results
or outcomes they want to achieve.
They help verify that the intended positive change in the development situation
has actually taken place.
3. OUTPUT INDICATORS
Output indicators help to measure and verify the production of outputs.
Outputs are tangible results that can be delivered within a short timeframe.
This means that the output itself may be measurable and may clearly indicate how
to verify that it has been produced.
Output indicators are most useful to project managers, who are responsible for the
production of outputs and their relevance to the outcome in question.