I gave this talk at a Nigeria Health Summit in March 2016. It was an introduction to impact evaluation: what it is, when it's a good idea, and some possible approaches.
Gilligan quantitative impact eval methods
1. INTERNATIONAL FOOD POLICY RESEARCH INSTITUTE
Quantitative Impact Evaluation Methods
Dan Gilligan, IFPRI
INTERNATIONAL LIVESTOCK RESEARCH INSTITUTE
2. An Introduction to Quantitative Impact Evaluation
I. Why is impact evaluation important?
• What are appropriate goals for an impact evaluation?
• Monitoring and evaluation
II. How do you design an impact evaluation?
• The evaluation problem
• Measuring causal impact
• Impact evaluation methodologies
3. Introduction (cont'd)
III. Impact Evaluation and Measurement Tools
• Choice of evaluation estimator
• Data requirements
• How to randomize
• Sample design
• Sample size
4. What are appropriate goals for an impact evaluation?
Measure impact on important outcomes
• Need a limited set of outcome indicators that are easy to measure
Estimate the program's cost effectiveness
Explain which components of a program work best
Caution:
• Evaluations can only answer a limited number of questions
• Evaluations sometimes cannot explain what caused the impacts
Effective monitoring and qualitative assessments help to explain the context for impact evaluation results
5. Indicators for Monitoring and Evaluation
[Results-chain diagram linking the indicator levels tracked by monitoring and evaluation]
• INPUTS: financial and physical resources; track resources used in the intervention (e.g., budget support for local service delivery)
• OUTPUTS: goods and services generated; more local government services delivered (e.g., textbooks, food delivered, roads built)
• OUTCOMES: access, usage and satisfaction of users (e.g., school attendance, vaccination rates, food consumption, number of mobile phones)
• IMPACT: effect on living standards; better welfare impacts (e.g., literacy, health); increase in participation, happiness
6. II. How do you design an impact evaluation?
The central problem of impact evaluation
• Want to measure the impact of a program or "treatment" on outcomes
• How do we know measured impacts are due to the program?
• If we want to claim that the impacts observed are causal, we need an 'identification strategy': a way to attribute the observed effects to the program and not to other factors
7. II. How do you design an impact evaluation?
Designing the impact evaluation
• Measure impact by comparing outcomes in households exposed to the treatment to what those outcomes would have been without that exposure: the counterfactual
• Problem: you cannot observe the counterfactual because program beneficiaries receive the treatment
• Need to construct a comparison group from nonbeneficiaries
• Comparison group makes it possible to control for other factors that affect the outcome
Ex: IFPRI evaluated the effect of Ethiopia's public works program (PSNP) on food consumption, but food prices rose at the same time; use the comparison group to remove the effect of rising prices on food consumption in impact estimates
8. Suppose we observe an increase in outcome Y for beneficiaries over time after an intervention
[Figure: the observed outcome for beneficiaries rises from Y0 at baseline (t0) to Y1 at follow-up (t1) after the intervention]
9. To measure impact, we need to remove the counterfactual from the observed outcome
[Figure: the observed outcome rises from Y0 at baseline (t0) to Y1 at follow-up (t1); the comparison group traces the counterfactual outcome Y1* at follow-up]
Impact = Y1 - Y1*
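To make the arithmetic concrete, here is a minimal Python sketch (with made-up illustrative numbers, not figures from the deck) contrasting the naive before/after change with the impact obtained after removing the counterfactual:

# Hypothetical outcome values, for illustration only.
y0 = 100.0        # beneficiaries' outcome at baseline (t0)
y1 = 130.0        # beneficiaries' observed outcome at follow-up (t1)
y1_star = 115.0   # counterfactual outcome at t1, estimated from the comparison group

naive_change = y1 - y0   # 30.0: mixes program impact with other factors (e.g., rising prices)
impact = y1 - y1_star    # 15.0: impact after removing the counterfactual

print(f"Naive before/after change: {naive_change}")
print(f"Estimated impact (Y1 - Y1*): {impact}")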
10. What You Can Miss Without a Comparison Group
Impact of school feeding on anemia prevalence of girls age 10-13 (*anemic = hemoglobin < 11 g/dL)
[Bar chart: anemia prevalence (%) in Round 1 and Round 2 for the school feeding (SFP), take-home ration (THR), and control (CTR) groups; prevalence changed by -5.3 points under SFP, -3.4 under THR, and +13.9 in the control group, giving estimated impacts of -19.2% for SFP and -17.2% for THR]
11. Constructing a Comparison Group
Suppose we want to measure the impact of public works on household food security (calorie consumption)
Q: Why not compare average calorie consumption of PW beneficiaries to average calorie consumption of randomly selected nonbeneficiaries?
A: On average, nonbeneficiaries are different from beneficiaries in ways that make them an ineffective comparison group
Need to correct for pre-program differences between beneficiaries and nonbeneficiaries
• Beneficiaries are usually poorer; they also decided to participate
• If you don't control for this, impact estimates are biased
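The selection bias described on this slide can be illustrated with a small simulation. The Python sketch below (entirely hypothetical numbers) builds a population in which poorer households both consume fewer calories and are more likely to join the public works (PW) program, and shows that the naive beneficiary-versus-nonbeneficiary comparison badly misstates the true impact:

import numpy as np

rng = np.random.default_rng(7)

# Simulated households (hypothetical): poorer households consume fewer
# calories and are more likely to join the public works program.
n = 10_000
poverty = rng.normal(0, 1, n)
joins_pw = rng.random(n) < 1 / (1 + np.exp(-1.5 * poverty))              # poorer => more likely to join
calories = 2200 - 250 * poverty + 150 * joins_pw + rng.normal(0, 100, n)  # true impact = +150 kcal

naive = calories[joins_pw].mean() - calories[~joins_pw].mean()
print(f"Naive beneficiary vs nonbeneficiary difference: {naive:.0f} kcal (true impact is +150)")
# The naive comparison is biased downward because beneficiaries were poorer to begin with.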
12. Impact Evaluation Methodologies
Ways of constructing a control or comparison group:
• Randomization
• Matching (including propensity score matching, covariate matching)
• Regression discontinuity design (RDD)
• Instrumental variables
• Difference-in-differences
13. Impact Evaluation Methodologies
Randomization
• Randomly assign communities or households into treatment and control groups before the program for the purpose of evaluation
» Random assignment makes it likely that treatment and control communities have identical characteristics on average at baseline
» For safety nets, usually randomize at the community level
• Common approach: use phased rounds of program implementation and randomly decide which communities enter the program in each round
• Example of randomization from N. Uganda school feeding study
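A minimal sketch of the phased-rollout randomization described above (Python, with hypothetical community names and round counts; not the actual assignment procedure used in the Uganda study):

import random

# Hypothetical list of eligible communities (illustration only).
communities = [f"community_{i:03d}" for i in range(1, 31)]

random.seed(42)            # fix the seed so the assignment is reproducible and auditable
random.shuffle(communities)

# Split the shuffled list into three phased entry rounds of equal size;
# communities entering in the last round serve as the comparison group
# for the earlier rounds until they are phased in.
n_rounds = 3
round_size = len(communities) // n_rounds
assignment = {
    f"round_{r + 1}": communities[r * round_size:(r + 1) * round_size]
    for r in range(n_rounds)
}

for round_name, members in assignment.items():
    print(round_name, members[:3], "...")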
14. Impact Evaluation Methodologies
Randomization
• How do you justify having a control group?
» Justified if the program cannot reach all communities at once
» Some communities are always excluded
» Main difference between the control group and other nonbeneficiaries is that you interview the control group
» Ex: transparency in Nicaragua RPS evaluation; randomization done in public with media and politicians present
• There is consensus that a randomized-out control group provides the best estimate of counterfactual outcomes
» Results of a good randomized evaluation will be convincing to everyone: you have solid evidence of the impact of the program
15. Impact Evaluation Methodologies
Matching
• Match beneficiary and nonbeneficiary households by characteristics observed in a survey
• Estimate impact as the difference in weighted average outcomes between beneficiaries and matched nonbeneficiaries
• Propensity score matching matches households on the estimated probability of being in the program
• With matching, the quality of the evaluation depends heavily on the quality of the data: not as convincing as randomization
17. Impact Evaluation Methodologies
Many of the projects being presented here may be able to rely on matching methods for their evaluation
• Need detailed data from the baseline or on variables that change very little over time (adult education level)
Tips on Using Propensity Score Matching
• Need variables that are correlated with the outcome and with the treatment
• Comparison households should come from the same community as treated households if possible; otherwise include many community-level variables
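For readers who want to see the mechanics, here is a minimal propensity score matching sketch in Python using scikit-learn and simulated data (all variable names and values are hypothetical, not from an IFPRI evaluation). It estimates each household's probability of participation from baseline characteristics, matches each beneficiary to the nonbeneficiary with the closest score, and averages the outcome differences:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated baseline data (hypothetical): poorer households are more likely to participate.
n = 2000
poverty_score = rng.normal(0, 1, n)
adult_education = rng.normal(0, 1, n)
participates = rng.random(n) < 1 / (1 + np.exp(-(0.8 * poverty_score - 0.3 * adult_education)))

# Outcome with a true program effect of +5, plus confounding from poverty.
outcome = 50 - 4 * poverty_score + 5 * participates + rng.normal(0, 2, n)

# 1. Estimate the propensity score: probability of participation given baseline covariates.
X = np.column_stack([poverty_score, adult_education])
pscore = LogisticRegression().fit(X, participates).predict_proba(X)[:, 1]

# 2. Match each beneficiary to the nonbeneficiary with the closest propensity score.
treated = np.where(participates)[0]
control = np.where(~participates)[0]
matched_control = control[
    np.argmin(np.abs(pscore[control][None, :] - pscore[treated][:, None]), axis=1)
]

# 3. Impact estimate: average outcome difference between beneficiaries and their matches.
impact = np.mean(outcome[treated] - outcome[matched_control])
print(f"Matched impact estimate: {impact:.2f} (true effect is 5)")

Nearest-neighbor matching with replacement is the simplest variant; as the slide notes, the credibility of any matching estimate rests on having rich baseline covariates that predict both participation and the outcome.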
18. Impact Evaluation Methodologies
Regression Discontinuity Design (RDD)
If program eligibility is based on a threshold for some characteristic (e.g., poverty index), compare outcomes for households just above and just below the threshold
More useful for poverty programs targeted on easily observable and measurable criteria
» poverty score, proxy means score, food insecurity score
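To sketch the idea in code, here is a hypothetical Python example (simulated data, not the El Salvador figures on the later slides) that estimates an RDD impact by comparing households in a narrow bandwidth just below and just above an eligibility threshold on a poverty score:

import numpy as np

rng = np.random.default_rng(1)

# Simulated data (hypothetical): households below the threshold are eligible,
# and eligibility raises the probability of completing school by 10 percentage points.
n = 20_000
poverty_score = rng.uniform(20, 45, n)
threshold = 32.0
eligible = poverty_score < threshold
pr_complete = 0.30 + 0.005 * (poverty_score - threshold) + 0.10 * eligible
completed = rng.random(n) < pr_complete

# Simplest RDD estimate: compare mean outcomes within a narrow bandwidth on either
# side of the threshold (a real analysis would fit local regressions on each side).
bandwidth = 2.0
just_below = eligible & (poverty_score > threshold - bandwidth)
just_above = ~eligible & (poverty_score < threshold + bandwidth)
impact = completed[just_below].mean() - completed[just_above].mean()
print(f"RDD impact estimate at the threshold: {impact:.3f} (true jump is 0.10)")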
19. How RDD Measures Impact
Before start of the program
[Scatter plot: Pr(complete secondary school), 0 to 25, against poverty score, 20 to 45]
20. How RDD Measures Impact
After the program
[Same plot after the program, with beneficiaries on one side of the eligibility threshold and nonbeneficiaries on the other]
21. How RDD Measures Impact
After the program
[Same plot; the jump in Pr(complete secondary school) at the threshold is labeled IMPACT]
22. Example of RDD from El Salvador RPS Evaluation
Figure 4. Change in enrollment rate of 7-12 year olds from 2006-2007 by distance from implied cluster threshold, 2006 and 2007 entry groups
[Scatter plot: change in enrollment rate (y-axis, -0.05 to 0.1) against distance to cluster threshold (x-axis, -10 to 15), shown separately for the 2006 and 2007 entry groups]
Source: Impact Evaluation Survey Data, 2008
23. Difference-in-Differences (DID)
Using any evaluation method, measure outcomes before and after the program begins to obtain "difference-in-differences" (DID) impact estimates
Impact = (T1 - T0) - (C1 - C0)
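A worked example of the DID formula with hypothetical group means, where T denotes the treatment group and C the comparison group at baseline (0) and follow-up (1):

# Hypothetical group means, for illustration only.
T0, T1 = 1800.0, 2100.0   # treatment group: calories per capita at baseline and follow-up
C0, C1 = 1750.0, 1850.0   # comparison group: calories per capita at baseline and follow-up

# DID: the treatment group's change, net of the change the comparison group
# experienced over the same period from other factors.
impact = (T1 - T0) - (C1 - C0)
print(f"DID impact estimate: {impact:.0f} calories per capita")   # (300) - (100) = 200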
24. Cost Effectiveness
Comparisons of programs should focus on cost effectiveness.
• Cost effectiveness is most relevant for policy: Which program has the biggest impact per dollar spent?
• Impact evaluation methodology focuses on measuring program benefits, which is one side of cost effectiveness. Would need to add a cost study similar to Caldés, Coady and Maluccio, IFPRI, 2004.
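As a closing illustration of the comparison this slide calls for, a minimal sketch (hypothetical figures only) that divides each program's estimated impact by its cost per beneficiary to rank programs by impact per dollar spent:

# Hypothetical impact estimates (percentage-point change in the outcome)
# and costs per beneficiary, purely for illustration.
programs = {
    "Program A": {"impact": 15.0, "cost_per_beneficiary": 60.0},
    "Program B": {"impact": 10.0, "cost_per_beneficiary": 25.0},
}

for name, p in programs.items():
    per_dollar = p["impact"] / p["cost_per_beneficiary"]
    print(f"{name}: {per_dollar:.2f} percentage points of impact per dollar spent")
# Program B has the smaller impact but the better cost effectiveness (0.40 vs 0.25).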