This presentation gives a vivid description of the basics of doing a program evaluation, with a detailed explanation of the Logical Framework Approach (LFA) and a practical example from the CLICS project. It also includes the CDC framework for program evaluation.
N.B.: Kindly open the ppt in slideshow mode to make full use of the animations.
The Basics of Monitoring, Evaluation and Supervision of Health Services in Nepal, by Deepak Karki
This presentation was made for health workers who have more than two decades of experience managing/implementing public health programs in Nepal, especially at the district level and below.
Presentation by Lini Wollenberg, Low Emissions Development Leader, CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS) at the Green Climate Fund Independent Evaluation Unit Learning-Oriented Real-Time Impact Assessment (LORTA)
Program Inception Workshop
July 24-26, 2018 Bangkok, Thailand
Errors Found in National Evaluation of Upward Bound - Positive Re-Analysis Results, by CHEARS
This presentation to the Council for Opportunity in Education (COE) documents errors in the National Evaluation of Upward Bound reports. Eight major errors are identified. Results are summarized from a re-analysis correcting for sampling and non-sampling errors, which found strong positive impacts for the federal TRIO program.
Successful organizations are constantly monitoring, evaluating, and improving based on their successes and failures. Learn how to design your own monitoring and evaluation program with this deck from WAN, and learn more on our free Strategic Advocacy Course, available at: http://worldanimal.net/our-programs/strategic-advocacy-course-new/about
This presentation is the continuation of the first part, which covered the basics of program evaluation. This ppt describes impact evaluation in detail and explains the logical framework with practical examples.
N.B.: Please go through it in slideshow view to see the animation effects.
Whole systems change across a neighbourhood
How can we collaborate with people to help them build their resilience? Get under the skin of the culture and the lives people live. Identify people’s feelings and experiences of community and understand what people think is shaped by different values and by the environment and infrastructure around them. The future of collaboration could bring many opportunities but people find it more difficult to live and act together than before. How can we help people…and communities build their resilience? Understand people’s different situations and capabilities to develop pathways that help them build resilient relationships. Help people experience and practice change together. Help people grow everyday practices into sustainable projects. Turn people’s everyday motivations into design principles. Support infrastructure that connects different cultures of collaboration. Build relationships with people designing in collaboration for the future…now.
Evaluation is a methodological area that is closely related to, but distinguishable from, more traditional social research. Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders and other skills that social research in general does not rely on as much.
"Assessing Outcomes in CGIAR: Practical Approaches and Methods" training by Burt Perrin for CGIAR Evaluation Community of Practice (ECOP), 2nd annual workshop 2014
This presentation tackles the following information:
*Approaches to Program Evaluation
*Three Dimensions that Shape Point of View on Evaluation
*Doing Program Evaluation
*Program Components as Data Sources
Reference: The Elements of Language Curriculum (A Systematic Approach to Program Development) by James Dean Brown of University of Hawaii at Manoa
Reporters: Joy Anne R. Puazo & Marie Buena S. Bunsoy
Program: Bachelor in Secondary Education Major in English
Year: 4th
Instructor: Mrs. Yolanda D. Reyes
Subject: Language Curriculum for Secondary Schools
Planning is making current decisions in the light of their future effects.
Health planning is a process culminating in decisions regarding the future provisions of health facilities and services to meet health needs of the community.
Evaluation serves two main purposes: accountability and learning. Development agencies have tended to prioritize the first, and given responsibility for it to centralized units. But evaluation for learning is the area where observers find the greatest need today and tomorrow. A learning approach means designing the evaluation with learning in mind from the start.
The Role of Federal Evaluation Activities in the “Evidence-Based” Policy and ..., by Nick Hart, Ph.D.
In early 2019, a new federal law called the Foundations for Evidence-Based Policymaking Act (Evidence Act) established new expectations for how federal agencies support and engage in evaluation activities. As part of the broader evidence movement, the new law recognizes relationships across traditional disciplinary silos and outlines an expectation that federal agencies will better collaborate to produce and use evaluations to inform key decisions. This webinar will describe the contemporaneous policy environment for federal evaluation, offer insights about evaluation's role in the broader evidence movement, and discuss opportunities for interacting with emerging data strategies and planning processes that affect evaluators. Participants will gain an appreciation for the nuances of federal evaluation policy activities, develop an understanding of how these activities relate to other data policy reforms, and be introduced to opportunities for ongoing participation from the evaluation community.
Trust and Accountability to Support Effective Policymaking. Highlights of the Commission on Evidence-based Policymaking's recommendations that pertain to transparency, and recommendations for a panel of the National Academies to consider. September 9, 2019
In this webinar, Nick will discuss recent research on policy evaluation and the use of environmental evidence within the U.S. Environmental Protection Agency (USEPA).
Since the inception of the USEPA, considerable emphasis has been placed on the use of policy analysis tools that aim to prospectively inform environmental policy decisions, including cost-benefit analysis and risk assessment used for regulatory actions. However, compared to the amount of such ex ante analysis conducted at the USEPA before a decision is reached, relatively little evaluation of these same environmental policies is produced after implementation to inform future policy development or to modify existing policies.
This original research applies accountability theory and organizational learning literature in order to identify and explain unique institutional factors that affect USEPA evaluation supply to inform future efforts.
Through a series of mixed methods case studies, this research seeks to inform efforts aimed at improving the quality of environmental evidence within the USEPA—through evaluation and systematic reviews—in order to better inform decision-making and achieve desired environmental outcomes.
Evaluators who conduct studies of Federal government programs routinely encounter challenges navigating various data collection protocols and processes, such as Information Collection Requests (ICR) required by the Paperwork Reduction Act (PRA). This presentation will provide contextual information on why processes like the PRA and ICRs exist, how information collection burdens have changed over time, and highlight challenges routinely encountered in navigating such processes. The presentation will then discuss how evaluators can work with the ICR process to achieve target production timeframes, work within existing agency processes, and address and integrate public comments, while improving the overall quality of an evaluation. Data collected from original interviews related to activities of the Environmental Protection Agency (EPA) will be presented in highlighting evaluation challenges and solutions used to address limitations.
Developing Better Environmental Evidence in the U.S.: Determinants of Evaluat..., by Nick Hart, Ph.D.
Since the inception of the U.S. Environmental Protection Agency (USEPA), considerable emphasis has been placed on the use of policy analysis tools that aim to prospectively inform environmental policy decisions, including cost-benefit analysis and risk assessment used for regulatory actions. However, compared to the amount of such ex ante analysis conducted at the USEPA before a decision is reached, relatively little evaluation of these same environmental policies is produced after implementation to inform future policy development or to modify existing policies. This original research applies accountability theory and organizational learning literature in order to identify and explain unique institutional factors that affect USEPA evaluation supply to inform future efforts. Through a series of mixed methods case studies, this research seeks to inform efforts aimed at improving the quality of environmental evidence within the USEPA—through evaluation and systematic reviews—in order to better inform decision-making and achieve desired environmental outcomes.
Determinants of Evaluation Supply at the US EPA: A Case Study of the RCRA Ha..., by Nick Hart, Ph.D.
What facilitating factors encourage the production of evaluation in EPA’s RCRA Program? What real and perceived barriers impede the production of ex media and ex post evaluation in RCRA?
Hart & Newcomer 2015 -- Change and Continuity: Lessons Learned from the Bush ...Nick Hart, Ph.D.
Calls for "evidence‐based policy" and for assessing how well government programs work have been around for many years. The George W. Bush and Barack Obama Administrations both espoused support for the generation and use of evidence to guide and improve government management. The two presidents brought very different professional experiences, political views, and policy advisors to the job as Chief Executive of the federal bureaucracy, yet their “President’s Management Agendas” established similar expectations about the use of evaluation and performance data. The paper outlines how the two Presidential Administrations centrally approached “evidence‐based policy” and “performance management,” with emphases on program evaluation and performance measurement, respectively. We highlight the many similarities across the Administrations, the interesting differences, and the intriguing ways in which some lessons that could have been “learned” were not.
Looking Forward to the Future of Federal Evaluation Efforts: Addressing Ident..., by Nick Hart, Ph.D.
The Obama Administration has repeatedly used the annual budget formulation process to establish new policy directions for Federal government evaluations and evaluation offices. This presentation will discuss the suite of evaluation proposals offered in the FY 2016 Budget, including items for specific Federal agencies and initiatives intended to broadly apply to Federal evaluations. Specific areas to be discussed will include funding availability, funding flexibility, and information collection requests. The discussion will focus on challenges the proposals seek to address for the evaluation community, provide context for the Administration’s evaluation vision moving forward, and highlight policies the Budget does not address.
Using Evidence and Evaluation in Federal Budget Decisions, by Nick Hart, Ph.D.
The annual exercise of formulating the Federal budget typically takes at least a year – or longer – to run its full course. During that period of time, numerous actors and institutions within multiple branches of government consider and recommend funding levels for Federal programs. This panel will discuss how and to what extent key institutions and their respective processes incorporate available evidence, research, and program evaluations in reaching funding levels to recommend for Federal programs.
Hart and Newcomer -- Change, Continuity and Lessons Unlearned: A Tale of Two ..., by Nick Hart, Ph.D.
Calls for "evidence‐based policy" and for assessing how well government programs work have been around for many years. The George W. Bush and Barack Obama Administrations both espoused support for the generation and use of evidence to guide and improve government management. The two presidents brought very different professional experiences, political views, and policy advisors to the job as Chief Executive of the federal bureaucracy, yet their “President’s Management Agendas” established similar expectations about the use of evaluation and performance data. The paper outlines how the two Presidential Administrations centrally approached “evidence‐based policy” and “performance management,” with emphases on program evaluation and performance measurement, respectively. We highlight the many similarities across the Administrations, the interesting differences, and the intriguing ways in which some lessons that could have been “learned” were not. The role of the Office of Management and Budget (OMB) and pertinent laws are detailed, and the experience of the Federal executive agencies implementing the Administrations' agendas are summarized.
Hart 2014 -- Overseeing the Enforcers (Delivered on Nov. 6, 2014), by Nick Hart, Ph.D.
The Environmental Protection Agency’s (EPA) enforcement priorities routinely come under criticism for the appearance of inappropriate targeting on the one hand, and the lack of targeting on the other. This study explores programmatic determinants of Federal oversight inspections conducted under the Resource Conservation and Recovery Act (RCRA), including how a state’s performance manifested through inspection intensity affects EPA’s decisions to investigate within a state jurisdiction. The models with year and regional fixed effects suggest that increases in state inspection intensity from the preceding year have a significant effect on reducing the rate of Federal inspections. A theoretical typology for enforcement activity is proposed for future consideration and modeling.
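The fixed-effects specification described above can be illustrated with a minimal sketch. This is not the study's actual model; the data are simulated and the variable names (lag_state_intensity, fed_inspection_rate) are hypothetical stand-ins:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated state-year panel; names and values are illustrative only,
# not the study's actual data or specification.
states = [f"S{i}" for i in range(20)]
years = list(range(2005, 2012))
df = pd.DataFrame([(s, y) for s in states for y in years], columns=["state", "year"])
df["region"] = df["state"].str[1:].astype(int) % 4        # assign states to 4 regions
df["lag_state_intensity"] = rng.normal(size=len(df))      # prior-year state inspection intensity
# Build in a negative effect (-0.4) of lagged state intensity on federal inspections
df["fed_inspection_rate"] = 5 - 0.4 * df["lag_state_intensity"] + rng.normal(size=len(df))

# OLS with year and regional fixed effects, entered as dummy variables
fit = smf.ols("fed_inspection_rate ~ lag_state_intensity + C(year) + C(region)", data=df).fit()
print(fit.params["lag_state_intensity"])  # recovers roughly -0.4
```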
Understanding the Challenges of Street Children, by SERUDS INDIA
By raising awareness, providing support, advocating for change, and offering assistance to children in need, individuals can play a crucial role in improving the lives of street children and helping them realize their full potential.
Donate Us
https://serudsindia.org/how-individuals-can-support-street-children-in-india/
ZGB - The Role of Generative AI in Government transformation.pdf, by Saeed Al Dhaheri
This keynote was presented during the 7th edition of the UAE Hackathon 2024. It highlights the role of AI and Generative AI in addressing government transformation to achieve zero government bureaucracy.
Many ways to support street children.pptx, by SERUDS INDIA
Jennifer Schaus and Associates hosts a complimentary webinar series on The FAR in 2024. Join the webinars on Wednesdays and Fridays at noon, eastern.
Recordings are on YouTube and the company website.
https://www.youtube.com/@jenniferschaus/videos
Presentation by Jared Jageler, David Adler, Noelia Duchovny, and Evan Herrnstadt, analysts in CBO’s Microeconomic Studies and Health Analysis Divisions, at the Association of Environmental and Resource Economists Summer Conference.
This session provides a comprehensive overview of the latest updates to the Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (commonly known as the Uniform Guidance) outlined in the 2 CFR 200.
With a focus on the 2024 revisions issued by the Office of Management and Budget (OMB), participants will gain insight into the key changes affecting federal grant recipients. The session will delve into critical regulatory updates, providing attendees with the knowledge and tools necessary to navigate and comply with the evolving landscape of federal grant management.
Learning Objectives:
- Understand the rationale behind the 2024 updates to the Uniform Guidance outlined in 2 CFR 200, and their implications for federal grant recipients.
- Identify the key changes and revisions introduced by the Office of Management and Budget (OMB) in the 2024 edition of 2 CFR 200.
- Gain proficiency in applying the updated regulations to ensure compliance with federal grant requirements and avoid potential audit findings.
- Develop strategies for effectively implementing the new guidelines within the grant management processes of their respective organizations, fostering efficiency and accountability in federal grant administration.
Canadian Immigration Tracker March 2024 - Key Slides, by Andrew Griffith
Highlights
Permanent Resident numbers decrease, and the TR2PR share declines to 52 percent of all Permanent Residents.
March asylum claim data not issued as of May 27 (unusually late). Irregular arrivals remain very small.
Study permit applications are decreasing sharply as a result of the announced caps, down over 50 percent compared to February.
Citizenship numbers remain stable.
Slide 3 has the overall numbers and change.
Up the Ratios Bylaws - a Comprehensive Process of Our Organization, by uptheratios
Up the Ratios is a non-profit organization dedicated to bridging the gap in STEM education for underprivileged students by providing free, high-quality learning opportunities in robotics and other STEM fields. Our mission is to empower the next generation of innovators, thinkers, and problem-solvers by offering a range of educational programs that foster curiosity, creativity, and critical thinking.
At Up the Ratios, we believe that every student, regardless of their socio-economic background, should have access to the tools and knowledge needed to succeed in today's technology-driven world. To achieve this, we host a variety of free classes, workshops, summer camps, and live lectures tailored to students from underserved communities. Our programs are designed to be engaging and hands-on, allowing students to explore the exciting world of robotics and STEM through practical, real-world applications.
Our free classes cover fundamental concepts in robotics, coding, and engineering, providing students with a strong foundation in these critical areas. Through our interactive workshops, students can dive deeper into specific topics, working on projects that challenge them to apply what they've learned and think creatively. Our summer camps offer an immersive experience where students can collaborate on larger projects, develop their teamwork skills, and gain confidence in their abilities.
In addition to our local programs, Up the Ratios is committed to making a global impact. We take donations of new and gently used robotics parts, which we then distribute to students and educational institutions in other countries. These donations help ensure that young learners worldwide have the resources they need to explore and excel in STEM fields. By supporting education in this way, we aim to nurture a global community of future leaders and innovators.
Our live lectures feature guest speakers from various STEM disciplines, including engineers, scientists, and industry professionals who share their knowledge and experiences with our students. These lectures provide valuable insights into potential career paths and inspire students to pursue their passions in STEM.
Up the Ratios relies on the generosity of donors and volunteers to continue our work. Contributions of time, expertise, and financial support are crucial to sustaining our programs and expanding our reach. Whether you're an individual passionate about education, a professional in the STEM field, or a company looking to give back to the community, there are many ways to get involved and make a difference.
We are proud of the positive impact we've had on the lives of countless students, many of whom have gone on to pursue higher education and careers in STEM. By providing these young minds with the tools and opportunities they need to succeed, we are not only changing their futures but also contributing to the advancement of technology and innovation on a broader scale.
Insights from Program Evaluation for Retrospective Reviews of Regulations
1. Insights from Program Evaluation for Retrospective Evaluation of Regulations
Nicholas R. Hart (@nickrhart)
PhD Candidate, Trachtenberg School of Public Policy and Public Administration
December 8, 2015, Society for Risk Analysis, Arlington, VA
The George Washington University, Washington, DC
2. Setting the Stage…
• Program Evaluation: application of systematic analytical (social science) methods to address questions about program operations and results; includes:
– Performance Measurement & Monitoring: routine measurement of program inputs, outputs, and intermediate/long-term outcomes attributed to a program
• Both involve measurement + judgment
3. Applying Program Evaluation to Regs
• Program Evaluation refers to both a mindset and the application of analytical skills, and is a profession.
• 3 especially relevant elements of evaluation practice for evaluating regulatory impact:
1. Articulate the Theory of Change
2. Frame Appropriate Evaluation Questions
3. Match Design to Evaluation Questions
4. #1: Articulate a Program or Policy’s Theory of Change
Key Components
• Conceptual: explicit theory or model of how the program/policy is expected to cause the intended outcomes
• Empirical: evaluation guided by the model.
Modeling the Theory of Change
• What elements constitute the “treatment (regulation)”?
• What are the desired outcomes – in the longer run and in the shorter run?
• What is the expected process to produce desired outcomes?
• How are the program elements supposed to relate to one another?
• What are important bridging assumptions about how processes and elements are supposed to relate to one another?
• What are contextual (or mediating) factors outside of the control of program implementers that may affect the ability of the program or policy to produce the desired outcomes?
5. Basic Theory of Change (diagram): Program Elements → Mediators → Outcomes, with Moderators acting on the pathway.
6. Generic Logic Model (diagram): Inputs (people, money, clients, case-load) → Processes (activities, initiatives, procedures) → Outputs (services, products) → Outcomes (impacts, effects, results), with Intervening (Contextual) Variables acting across the chain.
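As a planning aid, the generic logic model above can be captured in a simple data structure; a minimal sketch (the class and the example entries are illustrative, not from the deck):

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Generic logic model: inputs -> processes -> outputs -> outcomes,
    with intervening (contextual) variables noted alongside the chain."""
    inputs: list = field(default_factory=list)       # people, money, clients, case-load
    processes: list = field(default_factory=list)    # activities, initiatives, procedures
    outputs: list = field(default_factory=list)      # services, products
    outcomes: list = field(default_factory=list)     # impacts, effects, results
    intervening: list = field(default_factory=list)  # contextual variables outside program control

# Hypothetical regulatory example; entries are illustrative only
model = LogicModel(
    inputs=["public funding", "enforcement staff"],
    processes=["permit reviews", "inspections"],
    outputs=["permits reviewed", "enforcement actions taken"],
    outcomes=["lower emissions", "improved public health"],
    intervening=["state administrative capacity", "state political will"],
)
for stage in ("inputs", "processes", "outputs", "outcomes"):
    print(f"{stage}: {getattr(model, stage)}")
```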
7. Assessing the Impact of Environmental Regulations on Lead Emissions (logic model diagram)
Inputs: Regulations; Public Funding; State and Federal Enforcement
Mediating Factors*: State Administrative Capacity; State Political Will; Non-profit Organization Capacity
Outputs: Number of Permits Reviewed; Enforcement Intensity
Short-Term Outcomes (~1 year): Lower Emissions; Lower Ambient Pollution Concentrations (Lead)
Mediating Factors*: Family Education and Income; Quality of Education (School Level); Geographic and Meteorological Conditions; Non-Illness Related School Absences; Individual Test Taking Skills
Intermediate Outcomes (1-3 years): Lower Exposure to Ambient Air Pollution (Lead); Reduced Blood Lead Levels; Reduced Neurological Effects; Lower Levels of Learning Defects; Decreased Absenteeism; Higher IQs
Long-Term Outcomes (4+ years): Lower Levels of Health Problems; Higher Educational Achievement; Higher Test Scores; Improved Quality of Life; Longer Life Expectancy; Environmental Protection
* Mediating Factors are factors outside of the control of government officials which, given variations in levels of their intensity and scope, may positively or negatively affect achievement of desired outcomes.
8. Consider Distinctive Features of Regulation
• Desired outcomes may be externally established (e.g., by Congress)
• Increased complexity from intergovernmental relations in implementation and enforcement (e.g., Federal, State, Local levels)
• Place requirements on private actors to achieve public goals (compared to direct provision of services or programs)
• Behavioral responses to requirements affect outcomes
• Often national scope, relying on ex ante analysis with focus on averages (instead of diversity of local conditions and preferences)
9. Identify Key Contextual Factors
• Assumptions about Linkages -- e.g., reduced U.S. vehicle emissions will reduce GHG concentrations, which will reduce effects of anthropogenic climate change
• Behavioral Responses -- e.g., manufacturers meet standard with smaller and/or alternate fuel vehicles; drivers drive more miles when cost-per-mile is lower
• “Unobservables” -- e.g., consumers’ preferences for different vehicle attributes
• External Forces -- e.g., consumer behavior affected by fuel prices and economic conditions; GHG concentrations affected by other countries’ emission patterns
10. #2: Frame Useful Evaluation Questions
• The type of question – and type of desired evaluation – matter for determining appropriate methods:
– Descriptive Questions – describe input levels, outputs, contextual variables and/or measurable outcomes
– Normative Questions – assess levels of compliance for outputs or outcomes with criteria in law, regulation, or equity norms
– Impact Questions – measures of effectiveness or “impact”
– Explanatory Questions – explain implementation (fidelity) and processes
12. #3: Match Design to Research Questions
Principles of Evaluation Design
1. Frame the most appropriate questions to address
2. Clear and answerable evaluation questions drive the design decisions
3. Design decisions are made to provide appropriate data and comparisons needed to address the evaluation questions
4. Decisions about measurement and design are made to bolster the methodological integrity of the results
5. During design deliberations careful consideration should be given to strengthening inferences about findings
6. Goal: to report that evaluation design and reporting decisions were characterized by strong methodological integrity
13. Examples
Objective #1: Describe program (or regulatory) activities
Illustrative Questions: How extensive and costly are the regulatory enforcement activities? How do implementation efforts vary across sites, beneficiaries, regions? Has the regulation been implemented sufficiently to be evaluated?
Possible Designs: Performance Measurement; Exploratory Evaluations; Evaluability Assessments; Multiple Case Studies
Objective #2: Probe targeting & implementation
Illustrative Questions: How closely are the protocols implemented with fidelity to the original design? What key contextual factors are likely to affect intended outcomes? What feasibility or management challenges hinder successful implementation of the regulation?
Possible Designs: Multiple Case Studies; Implementation or Process Evaluations; Performance Audits; Compliance Audits
Objective #3: Measure impact of policy or regulation
Illustrative Questions: What are the average effects across different implementations of the regulation? Has implementation of the regulation produced results consistent with its design (espoused purpose)? Is the implementation strategy more (or less) effective in relation to its costs?
Possible Designs: Experimental Designs/RCTs; Non-experimental Designs (difference-in-difference, propensity score matching, etc.); Cost-effectiveness & Benefit-Cost Analysis; Systematic Reviews & Meta-Analyses
Objective #4: Explain how/why (un)intended effects are produced
Illustrative Questions: How/why did the regulation have the intended effects? To what extent has implementation of the regulation had important unanticipated negative spillover effects? How likely is it that the regulation will have similar effects in the future?
Possible Designs: Impact Pathways and Process Tracing; System Dynamics; Configurational Analysis, etc.
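Slide 13 names difference-in-difference among the non-experimental impact designs; below is a minimal sketch of the basic two-group, two-period DiD computation, using made-up numbers:

```python
import pandas as pd

# Illustrative 2x2 panel: mean outcomes for regulated (treated) vs. unregulated
# (control) jurisdictions, before and after the rule takes effect. Numbers are made up.
df = pd.DataFrame({
    "group":   ["treated", "treated", "control", "control"],
    "period":  ["before", "after", "before", "after"],
    "outcome": [10.0, 7.0, 9.5, 9.0],
})

means = df.set_index(["group", "period"])["outcome"]
# DiD = (treated after - treated before) - (control after - control before)
did = (means["treated", "after"] - means["treated", "before"]) \
    - (means["control", "after"] - means["control", "before"])
print(f"Difference-in-difference estimate: {did:+.2f}")  # -2.50 with these numbers
```

The control group's before/after change stands in for what would have happened to the treated group absent the regulation, so subtracting it nets out shared time trends.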
14. Conclusions
• There are and will continue to be practical challenges and impediments for regulatory evaluation:
– Lack of incentives
– Methodological challenges
• But regulations can be appropriately designed to overcome the impediments to enable ex post program/policy evaluation
• Ultimately more attention to ex post regulatory evaluation will improve the regulatory environment by:
– Helping to identify underperforming regulations
– Improving future regulations (and ex ante analysis)
• Program evaluation thinking and tools can be productively applied to regulations
17. Matching Designs to the Evaluation Questions
Evaluation Objective #1: Describe program (or regulatory) activities
Illustrative Questions: Who does the regulation affect – both targeted organizations and affected populations? What activities are needed to implement the regulation? By whom? How extensive and costly are the regulatory enforcement activities? How do implementation efforts vary across delivery sites, subgroups of beneficiaries, and/or across geographical regions? Has the regulation been implemented sufficiently to be evaluated?
Possible Designs: Performance Measurement; Exploratory Evaluations; Evaluability Assessments; Multiple Case Studies
18. Matching Designs to the Evaluation Questions, cont.
Evaluation Objective #2: Probe regulatory implementation and targeting
Illustrative Questions: To what extent has the regulation been implemented? When evidence-based regulations are adopted, how closely are the protocols implemented with fidelity to the original design? What key contextual factors are likely to affect the ability of the regulatory implementers to have the intended outcomes? What feasibility or management challenges hinder successful implementation of the regulation? To what extent have activities undertaken affected the populations or organizations targeted by the regulation? To what extent are implementation efforts in compliance with the law and other pertinent regulations? To what extent does current regulatory targeting leave significant needs (problems) not addressed?
Possible Designs: Multiple Case Studies; Implementation or Process Evaluations; Performance Audits; Compliance Audits
19. Matching Designs to the Evaluation Questions, cont.
Evaluation Objective #3: Measure regulatory impact
Illustrative Questions: Has implementation of the regulation produced results consistent with its design (espoused purpose)? How have measured effects varied across implementation approaches, organizations, and/or jurisdictions? For which targeted populations has the regulation consistently failed to show intended impact? Is the implementation strategy more (or less) effective in relation to its costs? Is the implementation strategy more cost effective than other implementation strategies also addressing the same problem? What are the average effects across different implementations of the regulation?
Possible Designs: Experimental Designs, i.e., Random Control Trials (RCTs); Difference-in-difference designs; Propensity score matching (PSM); Statistical adjustments with Regression Estimates of Effects; Multiple Time Series Designs; Regression discontinuity designs; Cost-effectiveness Studies; Benefit-Cost Analysis; Systematic Reviews; Meta-Analyses
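Slide 19 lists propensity score matching (PSM) among the impact designs; here is a minimal sketch of the core steps on simulated data (scikit-learn; all values and names are illustrative, not from the deck):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic data: X are pre-treatment covariates, t is exposure to the
# regulation, y is the outcome. All values are simulated for illustration.
n = 500
X = rng.normal(size=(n, 3))
t = (X[:, 0] + rng.normal(size=n) > 0).astype(int)   # selection depends on X
y = 2.0 * t + X[:, 0] + rng.normal(size=n)           # true effect = 2.0

# 1. Estimate propensity scores P(t=1 | X)
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# 2. Match each treated unit to the nearest control on the propensity score
treated, control = ps[t == 1].reshape(-1, 1), ps[t == 0].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(control)
_, idx = nn.kneighbors(treated)

# 3. Average treated-minus-matched-control outcome difference (ATT)
att = (y[t == 1] - y[t == 0][idx.ravel()]).mean()
print(f"Matched ATT estimate: {att:.2f} (true effect: 2.00)")
```

Matching on the estimated propensity score balances the observed covariates between groups; it cannot, of course, adjust for unobserved confounders.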
20. Matching Designs to the Evaluation Questions, cont.
Evaluation Objective #4: Explain how and why programs produce intended and unintended effects
Illustrative Questions: How and why did the regulation have the intended effects? Under what circumstances did the regulation produce the desired effects? To what extent has implementation of the regulation had important unanticipated negative spillover effects? What are unanticipated positive effects of the regulation that emerge over time, given the complex web of interactions between the regulation and other rules or policies, and who benefits? For whom (which targeted organizations and/or populations) is the regulation more likely to produce the desired effects? What is the likely impact trajectory of the regulation (over time)? How likely is it that the regulation will have similar effects in other contexts (beyond the context studied)? How likely is it that the regulation will have similar effects in the future?
Possible Designs: Impact Pathways and Process Tracing; Contribution Analysis; Non-linear Modeling, System Dynamics; Configurational Analysis, e.g., Qualitative Case Analysis (QCA); Realist-based Synthesis
Editor's Notes
Evaluators bring a different way of thinking – and skill sets – to the question of how well programs are working
Related labels: theory of change, intervening mechanism evaluation, theory of action, theory-based evaluation, logical frameworks (log-frames) and program logic model.
Necessary to anticipate mediators
Unexpected moderators are what we don’t see coming
This commonly used logic model structure is oversimplified.
Logic model fatigue in many Federal agencies – and they often developed these in a way that was far too simple
If a consultant hands off a logic model within a couple days without consulting the program – you know something is wrong.
This perhaps drives some cynicism about the use of logic models
But if done properly, these can still be useful tools for program managers and staff
Must understand how the contextual factors have their effect so we know what to change
This is an example of what such a theory of change might look like for considering reductions in ambient lead emissions
Look at the mediating factors – to be more systematic about affecting longer-term outcomes
Benefits of modeling the theory of change:
Clarifies options for setting priorities on what to measure, where questions need to be addressed, and where to allocate resources.
Links program elements to objectives, indicators and data sources.
Helps to identify data needs.
When done as regulations are being developed, can lead to designs that permit data gathering and evaluation.
Recent work by Susan Dudley and the Regulatory Policy center at GWU identifies that regulations rarely have evaluation in mind when the rules are being promulgated
There are a number of distinctive features of regulations to consider – here are just a few that quickly come to mind:
Intergovernmental – must rely on other agents, which can add unexpected complexity
Requirements – not just the same as providing a service, must consider private sector reactions to goals intended to serve a public purpose
Behavior – may be room for game playing
Scope – often national, not varying across context or state
Given the unique characteristics of regulations – also very important to consider those contextual factors
Here are a few examples of what I mean
Assumptions about linkages – often a presumption of attribution that is difficult to attain in practice but important to acknowledge
Behavior – often unexpected responses when a certain action is taken
Unobservable individual preferences
Other forces like broader economic conditions that can affect emission patterns
With a robust theory of change that considers contextual factors and unique conditions of regulations – can move to framing evaluation questions
There are different questions often of interest to policy-makers, and they are often well beyond does this work or not.
Knowing the appropriate evaluation question will help determine how to evaluate
Consider this basic theory of change along the middle of this chart – as we move from left to right the types of questions evolve from How to Why
Each of those requires different types of evaluation and build up to impact evaluation on the right side
Note that these can be completed in conjunction with each other and thought of as a suite of evaluations, which may be just what’s needed to address all of the questions policymakers really have.
With the theory of change articulated and appropriate questions developed, the evaluation questions are then matched to an appropriate research design
In the evaluation field, we talk about a handful of principles for evaluation design:
Frame the appropriate question
The question drives the design
Any design decisions facilitate appropriate comparisons and data collection
Design decisions are intended to boost methodological integrity of eventual results and to strengthen the inferences about findings
At the end of the project, should be able to report that design decisions ultimately resulted in strong methodological integrity