GPRA Modernization Act of 2010:
Potential Legislative Perspectives on
Evaluation, Measurement, and Analysis
Clint Brass, Analyst in Government Organization
and Management, cbrass@crs.loc.gov
September 25, 2012 -- Washington Evaluators Brownbag
Outline of discussion
• Potential legislative perspectives on
government performance
• Framework of GPRA Modernization Act
• Some threshold issues for practitioners and
users
• Potential frameworks for evaluation,
measurement, and analysis
• Concluding observations

CRS-2
Potential legislative perspectives
on government performance

CRS-3
Institutional context
• Through public law and some non-statutory means, Congress may
  • Give agencies their missions
  • Specify their work processes and organization
  • Provide and allocate their resources
  • Determine or influence their priorities

• Congress may cooperate or compete with the
President to influence how agencies formulate and
implement policy
• In practice, agencies may operate with more or less
policy and political autonomy

CRS-4
Congress and government
performance: at least two major roles
• Using policy analysis, evaluation, and performance measurement in specific contexts, to inform
  • Thinking
  • Oversight
  • Policy making

• Establishing and modifying performance-related policies
  • Processes (e.g., GPRA, evaluation, planning, reporting)
  • Institutions (e.g., positions and organizations)
  • What constitutes “evidence”
  • Addressing needs of multiple stakeholders, for their use:
    • Congress (committees and Members)
    • Agency personnel
    • President
    • Public (stakeholders and individual citizens)

CRS-5
Congressional use of information and
analysis: pathways and brokers

“Administration”
• President
• OMB

Agencies
• Departmental heads
• Bureau heads
• Evaluation offices
• Budget offices

Federal “brokers”
• GAO
• CBO
• CRS
• Inspectors General

Nonfederal “brokers”
• Academia
• Think tanks
• Advocacy groups
• Lobbyists
• The public
• Nat. Acad. of Sciences*

Congress
• Authorizing committees
• Appropriations committees
• Oversight committees
• Budget committees
• Members and informal caucuses

Use?
• Thinking
• Oversight
• Policy making

Why brokers?
• “Satisficing”
• Synthesis
• Credibility

In the figure, information flows from the Administration and agencies to Congress both directly and indirectly, with the indirect pathways running through federal and nonfederal brokers.

Source: adapted from Brass (2011).

*The National Academy of Sciences is a private 501(c)(3) corporation that receives the majority of its funding from government contracts.
CRS-6
Framework of GPRA
Modernization Act

CRS-7
GPRA Modernization Act: comparison
with GPRA 1993 (slide 1 of 2)
• Continues three agency-level plans and reports
(“products”) from Government Performance and
Results Act of 1993 (GPRA 1993), but with
changes
• Establishes new products and processes that
focus primarily on goal-setting and performance
measurement in policy areas that cut across
agencies
• Brings attention to using goals and
measurements during policy implementation
• Increases Web-based reporting
CRS-8
GPRA Modernization Act: comparison
with GPRA 1993 (slide 2 of 2)
• Requires individuals to be responsible for some
goals and management tasks
• Aligns timing of many products to coincide with
presidential terms and budget proposals
• Includes more central roles for the Office of
Management and Budget (OMB)
• Establishes more specific requirements for
congressional consultations
• Continues emphasis on goal-setting and
performance measurement… along with
opportunities for, but little explicit emphasis on,
program evaluation
CRS-9
Timeline for implementation:
requirements and deadlines

Source: CRS.

CRS-10
Illustrative relationships among
contents of products and processes

Source: CRS.

CRS-11
Some threshold issues for
practitioners and users

CRS-12
Different “tribes,” jargons, and emphases

Among practitioners
• Performance measurement vs. program evaluation
• Impact evaluation vs. other evaluation types (e.g., qualitative, outcome, process)
• Randomized controlled trials (RCTs) vs. other impact evaluation types
• Summative vs. formative evaluation
• Policy analysis (often prospective) vs. evaluation and measurement (often retrospective)

Among and outside of practitioners
• Budgeteers, OMB, agency managers, evaluators, performance measurers, appropriators, authorizing committees, government operations committees
• Different skill sets, schedule orientations (budgeteers vs. managers), priorities (summative vs. formative), and interests

Source: CRS.
CRS-13
Tools: some key distinctions
• Program evaluation: use of one or more
formal methods to assess how, and the extent to
which, programs or policies achieve intended
objectives or cause unintended consequences
(evaluation may be ongoing activity or discrete
study)
• Performance measurement: periodic counting
of data related to programs or policies, which
typically does not account for “external factors”
• Policy analysis: typically prospective, drawing
on the above and other analytical methods like
forecasting, risk assessment, theory, logic, etc.
CRS-14
Example of distinction between
evaluation and measurement: impact
evaluation

Source: CRS.
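
The figure itself is not reproduced here; it contrasts an outcome that a program measures with the program's estimated impact. As a minimal, hypothetical sketch of that distinction in Python (all numbers invented): performance measurement reports the level of an outcome among those served, while impact evaluation estimates the program's contribution by comparing against a counterfactual, here approximated by a comparison group.

```python
# Hypothetical illustration: performance measurement vs. impact evaluation.
# All numbers are invented for the example.

participants = [62, 70, 68, 74, 66]   # outcome scores for people served by the program
comparison   = [60, 63, 58, 65, 61]   # outcome scores for a similar group not served

def mean(xs):
    return sum(xs) / len(xs)

# Performance measurement: count/summarize outcomes, without asking
# what would have happened absent the program ("external factors").
measured_outcome = mean(participants)

# Impact evaluation: estimate the program's effect by comparing against
# a counterfactual, here approximated by the comparison group.
estimated_impact = mean(participants) - mean(comparison)

print(f"Measured outcome (participants): {measured_outcome:.1f}")
print(f"Estimated impact (vs. comparison group): {estimated_impact:.1f}")
```

The two numbers can diverge sharply: a measured outcome can look strong even when external factors, not the program, produced most of it.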
CRS-15
Defining “success” and “performance”
(slide 1 of 2)
• Definition of “success” or “performance” is often
politically contested for the same program
• Many statutes do not specify goals or purposes in detail
• There may be trade-offs among potentially competing
values (efficiency, effectiveness, fairness, service, etc.)
• Multiple audiences bring their own perspectives and informational needs
  • Agency program staff
  • Agency leaders
  • Congress
  • President and OMB
  • Service delivery partners
  • Non-federal stakeholders
  • The public
CRS-16
Defining “success” and “performance”
(slide 2 of 2)
• Unit of analysis: multiple angles on performance,
broken down by…
  • Agency
  • Program
  • Policy
  • Strategy
  • Activity (mission and mission-support)
  • Goal
  • Outcome (end outcome and intermediate outcome)
  • Output
  • Metric, measure, or indicator
  • Clientele
  • Groups of any of the above

• Multiple potential research questions and
corresponding methods of evaluation, analysis, and
measurement
CRS-17
Thinking about “performance”:
organize by program, goal, or
something else?

Source: CRS.

CRS-18
Potential frameworks for evaluation,
measurement, and analysis

CRS-19
How a policy may work: logic models

Source: adapted from Hatry (2006).
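
The underlying figure is not reproduced here. As a rough sketch of the logic-model chain it depicts (inputs, activities, outputs, intermediate outcomes, end outcomes), with a hypothetical job-training program as the example:

```python
# Hypothetical logic model for an illustrative job-training program.
# The stage categories follow the standard chain; the entries are invented.

logic_model = {
    "inputs":                ["appropriated funds", "staff", "training facilities"],
    "activities":            ["recruit participants", "run training courses"],
    "outputs":               ["participants trained", "certificates awarded"],
    "intermediate outcomes": ["participants interview for jobs"],
    "end outcomes":          ["participants employed at higher wages"],
}

# Reading the model front-to-back shows how resources are expected to
# translate into results; external factors can intervene at each link.
for stage, items in logic_model.items():
    print(f"{stage:>22}: {', '.join(items)}")
```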

CRS-20
Looking across programs and
agencies
Source: CRS.

CRS-21
Potential for perverse incentives

Source: adapted from Fisher,
Schoenfeldt, and Shaw (2006).
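
The underlying figure is not reproduced here. As a hypothetical sketch of how a single measured target can distort behavior (numbers invented, not taken from the source figure): if staff are rewarded only on "cases closed," they can maximize the metric by working easy cases first, even when unmeasured quality suffers.

```python
# Hypothetical illustration of a perverse incentive: a target that counts
# only "cases closed" rewards working easy cases, not valuable ones.

cases = [(2, 0.2)] * 5 + [(10, 1.5)]   # (hours required, quality delivered)
BUDGET_HOURS = 10

def work_cases(order):
    """Close cases in the given order until the hour budget is exhausted."""
    hours = closed = 0
    quality = 0.0
    for needed, q in order:
        if hours + needed <= BUDGET_HOURS:
            hours += needed
            closed += 1
            quality += q
    return closed, quality

count_driven   = work_cases(sorted(cases, key=lambda c: c[0]))   # easy first
quality_driven = work_cases(sorted(cases, key=lambda c: -c[1]))  # valuable first

print(f"Count-driven:   {count_driven[0]} closed, total quality {count_driven[1]:.1f}")
print(f"Quality-driven: {quality_driven[0]} closed, total quality {quality_driven[1]:.1f}")
# The "cases closed" metric makes the count-driven strategy look five times
# better, even though it delivers less total quality.
```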
CRS-22
Concluding observations

CRS-23
“Evidence” and policy
• What is “evidence”? In practice…

• Retrospective (e.g., evaluations, measurements, evaluation
syntheses)
• Prospective (e.g., policy analysis tools)
• Current-day (e.g., values, ethics, risk preference)

• What constitutes “use” of evidence? Arguably, when evidence
informs…
• Thinking
• Oversight and monitoring
• Policy making

• What makes “evidence” and its presentation appear credible?
Assessments for credibility may look for…
• Appropriate methods (often, multiple methods)
• Definition(s) of success
• Fair representations about performance

CRS-24
Some potential issues for Congress
• Congressional consultations and defining “success”
• Agency and OMB representations about
performance
• Oversight, transparency, and public participation
• Crosscutting policy areas
• Design and implementation of the law
• Serving Congress’s needs, agencies’ needs?
• Promoting both improvement and accountability?
• Do agencies have the necessary capacity—staff, skills,
technology, funding—to implement the law?
CRS-25
Questions?

CRS-26
