Hart & Newcomer 2015 -- Change and Continuity: Lessons Learned from the Bush and Obama Administrations' Experiences with Evaluation and Performance Measurement
This document summarizes and compares the Bush and Obama administrations' initiatives around evaluation and performance measurement in government. Both administrations supported improving accountability through initiatives like the Program Assessment Rating Tool (PART) under Bush and cross-agency priority goals under Obama. However, the initiatives faced challenges engaging audiences, ensuring evaluation capacity, and clearly linking performance measurement to evaluation. The presentation identifies lessons for future administrations, such as calibrating OMB's role, generating success stories, and institutionalizing relationships between performance and evaluation functions.
1. Change and Continuity:
Lessons Learned from the Bush and Obama Administrations'
Experiences with Evaluation and Performance Measurement
Nicholas R. Hart
PhD Candidate, Trachtenberg School of Public Policy and Public Administration
Kathryn E. Newcomer
Director, Trachtenberg School of Public Policy and Public Administration
November 12, 2015
American Evaluation Association
Evaluation 2015 in Chicago, IL
THE GEORGE WASHINGTON UNIVERSITY
WASHINGTON, DC
2. Presentation Overview
• Provide brief survey of Bush and Obama Administration evaluation and performance measurement initiatives
• Highlight select similarities and differences
• Discuss lessons (un)learned
Hart and Newcomer – Change and Continuity
3. Key Features of Bush Initiatives
• President’s Management Agenda (PMA)
– Emphasis on human capital, competitive sourcing, electronic government, integrating budget and performance
– Established Performance Improvement Officer (PIO) and Performance Improvement Council (PIC)
• Program Assessment Rating Tool (PART)
– Questions on performance goals, comparison to similar programs, and effectiveness
– Questions about independent evaluation
5. Key Features of Obama Initiatives
• Evaluation Capacity and Barriers
– Transparency and Tiered Evidence
– Requests for Funding
– Chief Evaluation Officers
• Management Agenda
– Emphasis on customer service, shared service delivery, open data, IT delivery, strategic sourcing, financial management, real property
– High Priority Goals and Cross Agency Goals
– Quarterly Reviews/Strategic Reviews
7. Major Similarities
• Espoused support for delivering better results and improving accountability, but the audiences for the initiatives were not clear
• PMAs for both Administrations targeted management improvements in similar areas (service delivery, IT, contracting, real property, etc.)
• Evaluation focus was more on randomized controlled trials and impact evaluation
• OMB took the lead on performance management and served as the conduit for implementation guidance
• Both espoused the need for Chief Performance Improvement Officers but did not ensure the designated PIOs had time to devote to the function
• Neither emphasized congressional stakeholder engagement
• Neither effectively stressed linkages between performance measurement and evaluation
8. Major Differences
• More emphasis in Obama initiatives on agency flexibility for the management agenda
• Some of the Obama efforts were “institutionalized” in the GPRA Modernization Act of 2010
• Obama voiced support for increasing evaluation funding and reducing barriers – primarily in HHS, ED, and Labor
• Bush efforts used OMB as a police force rather than a coach – relied on OMB to help establish “stretch goals” and coordinate overall implementation
• Obama efforts faced delays in launching, whereas Bush efforts had a specified purpose and action relatively quickly
9. Lessons (un)Learned
1. Calibrate OMB role with agency needs
2. Establish and maintain audience attention
3. Effectively implement initiatives, with appropriate cross-agency collaboration
4. Generate and highlight success stories
5. Build sufficient evaluation capacity to support initiatives
10. Lessons (un)Learned
6. Institutionalize relationships between performance measurement and evaluation staffs and offices
7. Provide training for senior managers and political appointees about leadership roles in performance measurement and evaluation
8. Consult Congressional staff and committees on implementation and demand for use
[Intro]
Interest in looking at 2 administrations – from different parties – to consider similarities and differences in certain aspects of the management agenda.
These management agendas are a key feature for how modern presidencies seek to centrally improve government performance.
Goal to identify some lessons moving forward as they relate to evaluation and performance measurement.
Paper has more details on survey and similarities/differences. Happy to share.
Conclude with lessons that could have been learned, but at least so far do not appear to have been.
Under the Bush PMA, focus on wide range of management activities.
Sought to improve performance and coordination through the creation of the PIC near the end of the term
Perhaps best known for PART – a 25-30 question tool designed to incorporate performance information into budget decisions – one of the PMA goals
Variety of questions on performance, including whether independent evaluation had been conducted
Agencies self-assessed with documentation; ratings – and even definitions of programs – were then negotiated between agencies and OMB
Scores made publicly available
Example of USDA
The labels here suggest one obvious reason PART was contentious for some programs
Administration ultimately produced over 1,000 PARTs
Through the process created some encouragement to evaluate more and to create new performance measures (and efficiency measures)
[Evaluation]
Obama efforts had a fairly substantial push specific to improving the production and use of evaluations
Early efforts focused on providing lists of completed evaluations on websites – although some agencies already did so under the Bush admin
Added in what were called tiered evidence structures for certain types of grants – essentially award preference points for higher “quality” evidence
Lots of requests for funding – but limited provision by congress. Focus on set-asides and targeted evaluations.
Addition of eval officers – CNCS, CDC, and DOL
[Management]
On a slightly separate track was the management agenda.
Like Bush admin, focus on discrete management areas – human capital, open data
Created priority goals to focus on short-term areas of interest to political leadership.
Priority goals were originally created by the administration, then later mandated by GPRA Modernization
Revised every 2 years; they align with agency strategic plans, and the timing allows new administrations to revise them
Includes agency goals and cross agency goals
Performance and management info is now reviewed and discussed with agency senior leadership and OMB’s DDM periodically.
Like the PART information, priority goals are posted on a new website – Performance.gov.
[General]
Focus on targeted activities to improve performance and public dissemination of some information for public accountability
Targeted similar management areas – with the goal of improving performance in key areas
Both sought to encourage evaluation – especially impact evaluations
[Process]
OMB served as central conduit for implementation
While focus on accountability, little congressional outreach or engagement. Raises questions about who uses this.
Measurement and evaluation activities rarely integrated – different operations at OMB and within most agencies.
There are trade-offs in uniformity and flexibility for agency implementation of these initiatives – the Obama team leaned toward more flexibility on the heels of complaints about the uniformity of the PART
Relates to fourth bullet – OMB used as police/enforcement force under Bush to constantly set more stringent targets; likely some activities today where agencies set goals they know they will achieve
Bush efforts largely administrative (and therefore end-able) whereas Obama efforts in law
Much greater emphasis on evaluation funding in Obama team – but largely targeted at HHS, ED, and LABOR
#1 – OMB has an institutional role to implement the president’s agenda and there have been struggles with getting agencies to own processes. Calibrating these roles to ensure ownership is essential.
#2 – Once initiatives are launched they can sometimes be linked to specific personalities and therefore have short lifespans.
#3 – cross agency collaboration is sorely needed. Obama admin took steps here, acknowledging that resources are needed to better coordinate cross agency goals.
#4 – success stories of what worked can help avoid skeptical agency and OMB audiences
#5 – adequate evaluation capacity outside of EIML can help support the initiatives over the long term – including focus that avoids alienating agencies without experience with RCTs
#6 – bureaucratic inertia and cultural norms have shown it is difficult to connect evaluation and performance measurement staffs.
#7 – train and socialize senior career staff and new political appointees about roles
#8 – consult with congress