Performance Partnership Case Presentation: Evaluation @EPA
1. Determinants of Evaluation Supply at the
U.S. Environmental Protection Agency (EPA):
A Case Study of the National Environmental
Performance Partnership System (NEPPS)
Nicholas Hart, PhD
@NickRHart
September 28, 2016
European Evaluation Society
#EES2016
Maastricht, Netherlands
THE GEORGE WASHINGTON UNIVERSITY
WASHINGTON, DC
2. USEPA Programs and Regulated Entities
• Mission to protect human health and the
environment
• Implement suite of Federal environmental statutes
with emphasis on air, land, water, and chemicals
through Federal-State partnership
• 100+ programs (as defined by projects in the EPA budget)
– $9 billion per year in appropriated funds, plus $7 billion
in state funds
• 800,000+ regulated facilities in U.S.
– Environmental regulations est. to cost ~$330 billion annually
@NickRHart – Determinants of Evaluation Supply at the EPA: PPG Case Study
3. Evidence Status Quo at USEPA
• USEPA conducts extensive ex ante or prospective analysis during
rulemakings (~1,650 over past 15 years)
– Cost Benefit Analysis (regulatory impact assessments)
– Health Risk Assessments
• A robust scientific research network to support regulatory actions
• Relatively little interim or ex post (retrospective) evaluation
– EPA sponsored about 70 evaluations over the past 15 years, including 3
impact evaluations
– Evaluated programs often not the same as those with prospective
analyses
4. Overview of Evaluation Capacity at USEPA
• Limited capacity at USEPA today to conduct evaluation,
notwithstanding a strong performance monitoring culture and
emphasis on process studies
• No existing agency-wide mandate to conduct program evaluations
– When regulatory reviews are required by law, they are rarely completed
• Question: Why the gap in capacity?
– Specifically, what are the barriers and facilitators of production?
5. Case Study Approach for Larger Body of Research
• Case Studies of USEPA programs: hazardous waste, air quality, and performance
partnerships
– EPA is a large, diverse agency; case studies allowed for understanding programs
and cultures across agency
– Allows for in-depth consideration of processes, factors, and implications of
evaluation with specific EPA programs
– Included one program with extensive evaluation experience, and two programs
with more limited experience
• Data:
– 57 semi-structured interviews with state and Federal officials, and other
stakeholders
– Every evaluation completed by respective programs, as well as relevant laws and
evaluation policies
6. Performance Partnership History & Context
• Context: Rose from the need to address multi-generational and multi-media environmental
conditions, and from states' frustration over their limited ability to prioritize strategies
• Authorization: In 1995 Congress allowed USEPA to provide states flexibility in
implementing delegated Federal programs, including in the allocation of grant funds
• Identified Program Goals: target jointly identified environmental improvements that
vary by geography and time
• Political Dynamics: historically politically supported by Democrats and Republicans;
low political controversy
• Program Design: regions negotiate with states to implement priorities that align
between EPA and states, and to prioritize how funds are spent.
7. Performance Partnerships Participation
• But participation depends
on state government
political conditions &
other contextual factors
– States with higher
emissions less likely to
rely on the mechanism
– States receiving larger
share of environmental
budget from Federal
government more
likely to participate
8. NEPPS Evaluability: Joint Evaluations
• Program regulations direct states to complete evaluations in
partnership with USEPA each year
– No program guidance on how to develop them, so there is much variation across jurisdictions
• In most states these are reports of performance metrics submitted to
EPA, not evaluation per se
• Interviewed states describe these as a compliance activity, not meaningful
evaluations for learning or accountability
– There is no central repository of the “evaluations” for public access
9. NEPPS Evaluability: Formal Evaluations
• Completed NEPPS Program Evaluations: only 3 in 20 years
– (excludes the annual “joint evaluations”)
• Example:
– 2013 NEPPS Implementation Study
• Staff committed to conducting an evaluation, without new resources
• Interested in learning about aspects of program implementation across regions
• Relied on existing in-house expertise
• According to one interviewee, it was just “good timing”
• Did not lead to routinized evaluation – hampered by funding structures, among other
factors
10. NEPPS Evaluability: Outcomes Analysis
• No outcome analyses
produced by USEPA,
even though possible
• For example, Hart
(2016) looks at water
quality outcomes for
participating states
that prioritized WQ
improvements
11. PPG Evaluation Capacity Inventory
Criteria / Finding / Explanation:

1. Is the program significant enough to merit evaluation? – Finding: Yes
Performance Partnerships represent a sizeable contribution to the EPA's implementation of Federal programs and are
associated with about $1 billion in annual Federal spending, though only about 60 percent of eligible funds are applied
toward PPGs.

2. Are the program goals clear? – Finding: No
The program is clearly designed to provide flexibility and prioritization of efforts within states, and NEPPS is intended to
produce unspecified improvements in environmental outcomes. Some disagreement regarding goal prioritization was
evident in interviews for this research. Consistent programmatic goals for driving changes in targeted outcomes are not
readily available because, in practice, such efforts vary by state. The lack of consistent goals, and the program's absence of a
specific intended outcome, limits the ability to produce evaluations.

3. Can the results of evaluation influence decisions about the program? – Finding: Yes
NEPPS program managers have demonstrated a willingness to consider information from prior evaluations completed by
EPA and from external stakeholders initiating dialogues with states. Involvement from ECOS and states in the program's design
suggests a willingness to continue improving, where feasible, without imposing undue administrative burden on state staff.
NEPPS staff also demonstrated an ability to be self-critical in executing a process evaluation, identifying constructive
strategies to improve program processes and interactions as a result.

4. Are intended evaluation users and uses well-defined? – Finding: Yes
Program stakeholders and participants are clear, and the audience for potential evaluation includes national headquarters
staff, regional program managers, ECOS, and other state participants. Program criteria are largely established in
administrative or regulatory documents, suggesting a largely executive audience for evaluation.

5. Can an evaluation technically be completed? – Finding: Yes
Certain types of evaluation can be readily completed. Program staff have demonstrated an ability and willingness to
conduct self-critical process evaluations. However, impact or outcome evaluation will require further preparation to
develop appropriate longitudinal datasets to assess program investments and outcomes over time. Evaluation of this
program must be able to determine what is being evaluated, and malleable goals do not lend themselves well to establishing
a counterfactual.
12. Future Evaluation Opportunities for the
USEPA Partnerships
1. Clarify Program Design Features that Support Evaluation
2. Strengthen “Joint Evaluation” Protocols
3. Develop Systems for Outcome Analysis
13. Lessons for Other Performance Partnership Efforts
1. Building evaluation into program design needs curation – it cannot be static –
with appropriate guidelines and expectations established
2. Clearly delineating goals is critical for a common understanding of program
purpose and measures
– E.g., “Flexibility” not always achieved in practice
3. Multi-jurisdictional partnerships provide challenges for implementation, but
also numerous evaluation opportunities
4. Need to understand potential winners and losers in a
zero-sum partnership environment; one cannot promise there will be no losers
– E.g., tensions over Federal programs “losing control”
– E.g., shifting funds from one grant program to another means actions go unfunded in
one program to support another
14. Evaluation Capacity Factors at USEPA
• Findings from the Broader Research:
– 10 key capacity factors, observed across each of the cases
• each comprised of a cluster of multiple themes and nodes
• Largely context-dependent
– Factors typically identified as both barriers and facilitators
• Barriers were observed to limit activities, whereas facilitators come
together in an “evaluation window” for production to occur
– e.g., NEPPS implementation study
• Meaning, evaluation capacity at USEPA appears to occur on a
spectrum, rather than as a dichotomy
15. Emergent Capacity Framework for USEPA
[Framework diagram: Tier 1 (Impetus) and Tier 2 (Technical) capacity factors – Utility, Outcomes, Design, Motivation, Leadership, Resources, Methods, Data, Legal – situated within Cultural Context and Political Context]
16. Implications For Practice
• Evaluation is possible at EPA, even though not widespread –
including for Performance Partnerships
• No one-size-fits-all design identified for establishing
capacity in EPA programs
• Some factors may be more productively targeted at EPA to
improve capacity moving forward, in particular in “Tier 1” –
traditional factors emphasized in government activities may
not be the best place to start
17. Nick Hart
George Washington University
Washington, D.C.
nick.r.hart@gmail.com
@NickRHart
Thank You!
Editor's Notes
First, a bit of background about EPA.
USEPA is one of the premier environmental regulatory bodies in the world. In the US, we task the EPA with protecting human health and the environment through a suite of laws.
EPA considers that it has over 100 individual programs – though really many more – and these programs are jointly implemented by the Federal government and each of the 50 states (as well as tribes and territories).
The evidence status quo at EPA is really interesting.
A vast amount of information is utilized to guide policy decisions at the outset
From 1998–2013, ~200 RIAs (cost-benefit analyses) and ~1,450 Ecological and Human Health Risk Assessments (RegInfo.gov and Regulations.gov)
But relatively little effort is made to analyze those same regulations and policies over time with program or policy evaluation
The key question I set out to answer – why is there a gap in evaluation capacity relative to prospective analysis?
Is it really just as simple as one is required while the other is not? -- The answer is no, which I’ll come back to.
Case Selection Criteria:
Implement national-scale EPA policy
Well-established programs in operating stage
Joint implementation between EPA and state partners
What was considered an evaluation? Defined as:
EPA produced or supported
Social science method
Highly complex regulations, developed in areas where pre-existing knowledge about the science and technology was limited until regulatory development
So do they produce evaluations – yes, some. Largely process.
Lack a common understanding of what evaluation is – need to work with a common definition – and goals – in order to facilitate evaluation
Performance reviews exist, expectations about what those reviews are vary, and so does dissemination
If interested in evaluation of outcomes, better systems are needed to facilitate it. The outcome analysis by Hart required linking multiple datasets from different parts of the agency
Partnerships are a popular tool, but must be placed in context – they are not a panacea for fixing cross-jurisdictional challenges – and may not provide the desired level of flexibility
TK
While there is much emphasis on technical issues, they are not the sole or key issues at an institutional level
Note that cultural and political context interact with the rest
this framework varies from existing research by suggesting that capacity not only operates within a contextual dimension, but also that some aspects of evaluation can be so negative as to stop capacity for supply in particular areas altogether.
Introduce “evaluation windows”