U.S. Department of Labor (DOL)
Employment and Training Administration (ETA)
Occasional Paper Series (ETA-2015)
Making Performance Measures Work
in U.S. Workforce Development Programs
Prepared by: Allison Dunatchik, Jan Thomas Hagen,
Stephanie Koo, Jung-A Lee & Malemnganba Chenglei Wairokpam
London School of Economics and Political Science
(London, England)
March 26, 2015
This paper was funded, either wholly or in part, with Federal funds from the U.S.
Department of Labor, Employment and Training Administration (ETA) under Contract
Number DOLQ121A21885, Task Order DOLU121A21908. The contents of this
publication do not necessarily reflect the views or policies of the Department, nor does
mention of trade names, commercial products, or organizations imply endorsement of same
by the U.S. Government. This paper was one of two prepared under the 2015 ETA Research
Papers Program, which awarded funding to graduate-student capstone teams from two
international schools, the London School of Economics and Political Science (England) and
the Institut d'études politiques de Paris (France), to conduct original research on topics
relating to the workforce investment system.
Abstract
The recent introduction of the Workforce Innovation and Opportunity Act, a new piece of
legislation to supersede WIA and alter DOL’s performance management system, makes a
systematic assessment of the strengths and weaknesses of the current WIA PMS timely,
relevant, and important. This report evaluates the success of three key elements of the WIA
performance management system – (1) performance measurement, (2) performance data,
and (3) performance incentives – in promoting DOL’s objectives: effectiveness, efficiency,
and equity in the delivery of WIA services and training. Employing a three-pronged analysis
of WIA performance management tools – a study of the relevant literature, the application of
a theoretical model, and an empirical investigation – we draw ten key findings:
1. Current WIA performance management tools may sufficiently inform DOL of
performance on effectiveness, efficiency and equity goals, so long as that information
is timely and accurate.
2. The performance measurement and incentive system’s exclusive focus on
effectiveness objectives may preclude states from pursuing other DOL objectives.
3. The WIA performance incentive structure may not provide agents with sufficient
motivation to align their actions to DOL objectives.
4. The WIA common measures paint an ambiguous picture of state performance on
effectiveness goals.
5. Currently, DOL does not collect sufficient data to gauge the relevance of WIA
trainings and services provided within states.
6. States vary widely in efficiency according to our CPJ measure. We find that several
drivers of this variation may be at work.
7. Investigation into state WIA Annual Reports reveals a substantial demand for further
development in measuring and understanding cost efficiency and its drivers.
8. Despite DOL’s explicit equity objective, individuals with hard-to-serve characteristics
are not generally more likely to receive training.
9. DOL’s data reporting procedures produce substantial data lags, which hinder the
Department’s ability to observe current state and local performance and limit the
impact of the performance management system on program accountability.
10. The volume of inconsistencies and errors contained in WIA record data draws into
question the extent to which the data can be used to draw conclusions about WIA
performance that are reliable and comparable across states.
Based on this evaluation, we provide seven practical recommendations for how DOL might
improve performance management tools under WIOA to optimize their impact on program
outcomes.
Table of Contents
Abstract
List of Figures
List of Tables
List of Abbreviations
Executive Summary
1. Introduction
1.1 Research Aim
1.2 Background on the WIA Programming and the Public Workforce System
1.3 Report Outline
2. Effectiveness, Efficiency, and Equity
3. Literature Review
3.1 Influencing Behavior through Performance Measurement and Incentives
3.2 Performance Measurement in the Public Sector
3.2.1 The Effectiveness of Performance Measures in the Public Sector
3.3 Performance Measurement at the U.S. Department of Labor
4. Theoretical Model
4.1 Principal-Agent Framework
4.2 Principals and Agents in PWS and their Interests
4.3 Reducing Asymmetries of Information
4.3.1 WIASRD Data
4.3.2 Annual Report Requirements
4.4 Aligning Agent Incentives
4.4.1 Commitment Problems
4.4.2 Evaluating the WIA Incentive Scheme
5. Quantitative Analysis of Public Use Performance Data
5.1 Analytical Approach and Choice of States
5.2 Effectiveness
5.2.1 Improving Effectiveness Measures
5.2.2 Additional Measures of Effectiveness
5.3 Efficiency
5.3.1 Discussion of Results
5.4 Equity
5.4.1 Discussion of Results
6. Data Considerations
6.1 Time Lags
6.2 Inconsistencies in Reporting
7. Summary and Recommendations
Works Cited
Appendix
Appendix A – Terms of Reference
Appendix B – Key Definitions and Outtakes
Appendix C – Quantitative Research and Data
Appendix D – State Performance, Distance from Target, and Cost-Per-Job Rankings
Appendix E – Results from Logit Regression Specification
Appendix F – WIA PY12 Individual Level Performance Data Codebook
List of Figures
Figure 1.1 – The Public Workforce System
Figure 4.1 – The Principals and Agents of the WIA Title IB Adult Program and their Respective Interests
Figure 4.2 – Focusing Agent Efforts on Effectiveness
Figure 5.1 – Relationship Between Targets and Performance, PY12
Figure 5.2 – AEER Distance from Target, PY08-PY12
Figure 5.3 – AEER Negotiated Targets, PY08-PY12
Figure 5.4 – AEER Actual Performance, PY08-PY12
Figure 5.5 – Cost per Job, PY08-PY12
Figure 5.6 – State Variation in Cost Per Job, PY12
Figure 5.7 – State Variation in Adult Entered Employment Rate, PY12
Figure 5.8 – Relationship Between Cost per Job and AEER, PY12
Figure 5.9 – Cost Per Job, Western States, PY08-PY12
Figure 5.10 – Cost Per Job, Eastern States, PY08-PY12
Figure 5.11 – Cost Per Job, Midwestern States, PY08-PY12
Figure 5.12 – Cost per Job, Southern States, PY08-PY12

List of Tables
Table 4.1 – DOL Objectives
Table 4.2 – WIA Adult Program Common Measures
Table 5.1 – Cost Per Job PY08-PY12, by State
Table 5.2 – Cost per Job and AEER Rankings PY12, by State
Table 5.3 – Co-Enrollment and Cost per Job by State
Table 5.4 – PY12 Proportion of Individuals with Hard-to-Serve Characteristics
Table 5.5 – Proportion of Individuals with Hard-to-Serve Characteristics, Eastern States
Table 5.6 – PY12 Proportion of Individuals with Hard-to-Serve Characteristics, Midwestern States
Table 5.7 – PY12 Proportion of Individuals with Hard-to-Serve Characteristics
Table 5.8 – Likelihood of Selection into WIA Training Based on Personal Characteristics, US as a whole
Table 5.9 – Likelihood of Selection into WIA Training Based on Personal Characteristics
Table 5.10 – Likelihood of Selection into WIA Training Based on Personal Characteristics
Table 6.1 – Reporting Errors and Inconsistencies in WIA Participant Demographic Information
List of Abbreviations
AEER Adult Entered Employment Rate
AJC American Job Center
CPJ Cost Per Job
DOL Department of Labor
ETA Employment and Training Administration
ETP Eligible Training Provider
GPRA Government Performance and Results Act
ITA Individual Training Account
JTPA Job Training Partnership Act
LPM Linear Probability Model
LWIA Local Workforce Investment Area
LWIB Local Workforce Investment Board
PM(s) Performance Measure(s)
PMS Performance Management System
PWS Public Workforce System
PY Program Year
SWIB State Workforce Investment Board
WIA Workforce Investment Act
WIASRD Workforce Investment Act Standardized Record Data
WIB Workforce Investment Board
WIOA Workforce Innovation and Opportunity Act
Executive Summary
Since 1998, programs under the Workforce Investment Act (WIA) have served as the United
States Department of Labor’s primary vehicle for providing job training and employment
services to American jobseekers. WIA programs seek to place Americans looking for work
into well-paid jobs and supply employers with well-skilled, well-prepared workers. To this
end, the WIA programs provide individuals with tailored services, from employment search
assistance to skills gains, to ensure their job market success.
Recognizing the need for job training and services that are locally relevant and in demand,
DOL grants state and local agencies substantial control over the design and implementation
of federally funded WIA programs. To keep tabs on these highly decentralized programs,
DOL has long employed a performance management system (PMS) for WIA funded
programs with the aim of ensuring accountability, encouraging high quality of services, and
monitoring the performance of these locally implemented programs.
When crafted and administered well, performance management systems can be powerful
tools for ensuring that employees pursue desired objectives. Drawing from WIA legislation
and DOL strategic documents, we define DOL program objectives along three broad
categories: effectiveness, efficiency, and equity in the delivery of WIA services. These goals
are further expanded in the figure below.
DOL Objectives
The recent introduction of the Workforce Innovation and Opportunity Act, a new piece of
legislation to supersede WIA and alter DOL’s performance management system, makes a
systematic assessment of the strengths and weaknesses of the current WIA PMS timely,
relevant, and important. This report evaluates the success of three key elements of the WIA
performance management system – (1) performance measurement, (2) performance data,
and (3) performance incentives – in promoting DOL’s objectives. Based on this evaluation, we
provide recommendations for how DOL might improve these management tools to
optimize their impact on program outcomes.
Management Systems, Performance, and Outcomes
We begin with a review of literature from public administration, economics, and
management to better understand the landscape of performance management systems in the
public sector, paying specific attention to the role of performance measurement and
incentives. This investigation brings to light the mechanisms through which performance
management systems influence program outcomes. Performance measures (PMs) define the
employee performance dimensions or results of interest to managers. Performance data
produced through PMs inform managers of employee performance. An incentive system
allows managers to link performance results to rewards or sanctions with the aim of
reinforcing good behavior and disincentivizing bad behavior. Problems may arise, however,
when PMs do not measure the right things, data is unreliable, or incentives lead to distorted
behavior. A review of past DOL performance management systems shows that each of
these issues has occurred.
Principals, Agents and Interests
Next, we develop a theoretical model that frames the relationship between DOL and state
and local WIA program actors from a principal-agent perspective. We propose that the
decentralized nature of WIA programming produces a principal-agent problem in two ways:
the agents (state and local actors) may have interests distinct from the principal’s (DOL’s)
objectives, and asymmetries of information exist such that the principal cannot directly
observe the actions of the agent. These two issues may lead to moral hazard, where agents
pursue their own interests at the expense of the principal’s defined objectives of
effectiveness, efficiency, and equity. With these issues in mind, we assess the extent to which
WIA PMs and data sufficiently reduce asymmetries of information, and whether WIA
performance incentives adequately align agent incentives to promote the principal’s
objectives.
Key findings:
1. Current performance management tools may address the asymmetry of information
between DOL and its agents through the solicitation of extensive record data and
state reports, so long as that information is timely and accurate;
2. The performance measurement and incentive system’s exclusive focus on
effectiveness objectives may preclude states from pursuing other DOL objectives.
Because targets and incentives are only attached to effectiveness measures, agents are
not able to credibly commit to pursuing all three objectives. This is particularly true
where resources are limited and pursuing all three objectives would come at the
expense of performance on effectiveness.
3. The incentive structure, as is, may not provide agents with sufficient motivation to
align their actions to DOL objectives. Rewards lack proportionality and timeliness
and sanctions are not consistently enforced, reducing their credibility as threats.
Evaluating Effectiveness, Efficiency and Equity
Expanding on the conclusions drawn in our theoretical model, we conduct an empirical
investigation of state-level performance on each of the three Department objectives. Using
Program Year 2013, Q3 WIASRD individual-level data and a constructed panel of
aggregated state-level performance data for Program Years 2008-2012, we evaluate how well
WIA performance measures and data inform the Department of state performance on DOL
objectives, and how well they work to drive state performance toward those goals.
Effectiveness
We evaluate how well current WIA PMs promote DOL’s effectiveness objectives by
analyzing state performance compared to their performance targets over time. We construct
a measure to indicate how closely states perform to their projected target and observe trends
over time and across states in an effort to gauge effectiveness performance.
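The distance-from-target idea can be sketched as a simple relative gap. The exact construction used in the report may differ, so treat the definition below as illustrative:

```python
def distance_from_target(actual: float, target: float) -> float:
    """Relative gap between actual performance and the negotiated target.

    Positive values mean the state exceeded its target; negative values
    mean it fell short. (Illustrative definition only -- the report's
    exact construction may differ.)
    """
    return (actual - target) / target

# Example: a state posting a 62% AEER against a 58% negotiated target
gap = distance_from_target(0.62, 0.58)
print(f"{gap:+.1%}")  # roughly +6.9% above target
```

Comparing these gaps rather than raw rates partially controls for the fact that negotiated targets differ across states, although, as noted below, fluctuating targets still complicate interpretation.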
Key findings:
4. While the WIA common measures certainly speak to several of DOL’s effectiveness
objectives, as discussed above, they paint an ambiguous picture of state performance.
It is difficult to compare performance across states, given that state targets vary
substantially, based on such things as past performance and economic and
demographic characteristics. This issue can be addressed, in part, by comparing the
gaps between states’ actual and targeted performance, but this effort is complicated
by large fluctuations in targets over time, making it difficult to determine how much
of the gap is explained by movement in performance or movement in the target.
5. Currently, DOL does not collect sufficient data to gauge the relevance of WIA
trainings and services provided within states. Although DOL requires states and
Local Workforce Investment Boards (LWIBs) to identify the skills and trainings in
high demand by local employers, no measure exists that would allow DOL to
determine the proportion of trainings provided that fulfill this criterion, nor does one
exist that would determine how frequently the jobs that participants enter into post-
training are related to the WIA training they received.
Efficiency
To test whether agents are successful in ensuring efficiency in their delivery of WIA services,
we calculate a Cost Per Job (CPJ) measure by dividing annual federal WIA allotments by the
number of WIA Adult Program exiters who entered employment in the different states for
each program year. We compare CPJ within and across regions of the country.
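As a sketch, the CPJ calculation reduces to a single division per state and program year. The dollar and headcount figures below are hypothetical, not actual state allotments:

```python
def cost_per_job(allotment: float, exiters_entered_employment: int) -> float:
    """Cost Per Job: annual federal WIA allotment divided by the number of
    Adult Program exiters who entered employment."""
    if exiters_entered_employment == 0:
        raise ValueError("no exiters entered employment")
    return allotment / exiters_entered_employment

# Hypothetical figures for two states in one program year
states = {
    "State A": (12_500_000, 4_100),
    "State B": (9_800_000, 1_900),
}
for name, (allotment, exiters) in states.items():
    print(f"{name}: ${cost_per_job(allotment, exiters):,.0f} per job")
```

Because the numerator is the full allotment rather than training-specific spending, differences in co-enrollment, participant mix, and local training costs all flow into this measure, which is why the drivers below matter for interpretation.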
Key findings:
6. States vary widely in efficiency according to our CPJ measure. We find that several
drivers of this variation may be at work:
a. Co-enrollment rate: the extent to which states’ WIA participants are co-
enrolled in other non-WIA job-training programs and services may
substantially impact cost efficiency across states. In general, but not always,
we observe a correlation between high co-enrollment and low CPJ.
b. Prioritization of equity: the proportion of individuals with substantial barriers
to employment, such as those with disabilities or limited English proficiency,
has consequences for cost efficiency. We find that states serving greater
proportions of these individuals tend to have higher CPJ numbers.
c. Prioritization of effectiveness: states placing emphasis on effectiveness PMs
may do so at the expense of efficiency, spending more money per participant
to increase effectiveness. We find a weak correlation between high
performance and high CPJ.
d. Cost of local training programs: large variations exist in the costs of the types
of trainings that states and LWIBs provide. However, these costs are not
recorded in WIASRD data or in any publicly available DOL source,
precluding a systematic analysis of these costs.
7. Investigation into state WIA Annual Reports reveals a substantial demand for further
development in measuring and understanding cost efficiency and its drivers. Several
states have begun to develop measures of Return on Investment (ROI) from WIA
training programs, although methodologies are starkly different across states.
Equity
To test whether states seem to prioritize those individuals with hard-to-serve characteristics
(those over 55 years of age, high school dropouts, individuals with a disability, single parents,
individuals with limited English proficiency and individuals with low income) in the delivery
of WIA services, we run a linear probability model on several states to estimate how having
hard-to-serve characteristics influences an individual’s probability of being selected into WIA
training, the highest and most resource-intensive level of WIA services. We also re-run our
tests as logit models as a robustness check; these confirm our results. If states are
pursuing DOL’s equity objective, we should observe that having hard-to-serve characteristics
increases the likelihood of being selected into training.
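The linear probability model setup can be illustrated on synthetic data. The indicators below are stand-ins for WIASRD-style dummies, and the simulated negative effect merely mirrors the direction of the finding reported next; none of these numbers are actual estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Synthetic stand-ins for WIASRD-style indicators (illustrative only)
hard_to_serve = rng.integers(0, 2, n)    # e.g. disability, limited English, 55+
other_controls = rng.integers(0, 2, n)   # e.g. prior work history

# Simulate selection into training with a built-in -0.05 hard-to-serve effect
p_train = 0.30 - 0.05 * hard_to_serve + 0.10 * other_controls
trained = (rng.random(n) < p_train).astype(float)

# Linear probability model: OLS of the binary outcome on the indicators;
# the coefficient on hard_to_serve is the change in selection probability
X = np.column_stack([np.ones(n), hard_to_serve, other_controls])
beta, *_ = np.linalg.lstsq(X, trained, rcond=None)
print(f"estimated change in P(training) for hard-to-serve: {beta[1]:+.3f}")
```

Under DOL's equity objective, the coefficient on the hard-to-serve indicator should be positive; a zero or negative estimate, as simulated here, is the pattern the key finding below describes.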
Key finding:
8. Despite DOL’s explicit equity objective, individuals with hard-to-serve characteristics
are not generally more likely to receive training. This may indicate a failure to promote
equity, differences across states in how agents prioritize hard-to-serve individuals, or
both. A dedicated equity measure could support such prioritization and increase
accountability toward this objective while leaving room for local considerations. As it
stands, the performance measurement system under WIA does not appear to promote the
Department’s goal of equitable service delivery.
Data Considerations
Revisiting our previous caveat that the usefulness of WIA’s performance measurement is
contingent upon the quality of data collected, we provide a brief assessment of the reliability
of WIASRD data.
Key findings:
9. Data reporting procedures produce substantial data lags. These delays hinder DOL’s
ability to observe current state and local performance, limiting the impact of the
performance management system on program accountability. Because of such
substantial data lags, DOL is also unable to provide timely rewards and sanctions,
which is necessary for incentives to be fully effective.
10. In conducting a quantitative analysis of our sample of ten states, we find a
concerning number of data collection and reporting inconsistencies. This volume of
inconsistencies and errors draws into question the extent to which the data can be
used to draw conclusions about WIA performance that are reliable and comparable
across states.
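The kinds of inconsistencies at issue can be screened for mechanically. The field names and validity rules below are illustrative, not actual WIASRD element names or DOL edit checks:

```python
from datetime import date

def check_record(rec: dict) -> list[str]:
    """Flag basic internal inconsistencies in a participant record.
    Field names and rules are illustrative, not actual WIASRD elements."""
    problems = []
    # An exit date earlier than the registration date is logically impossible
    if rec["exit_date"] and rec["exit_date"] < rec["registration_date"]:
        problems.append("exit precedes registration")
    # Ages outside a plausible range suggest data-entry errors
    if rec["age_at_registration"] is not None and not (
        14 <= rec["age_at_registration"] <= 100
    ):
        problems.append("implausible age")
    # An employment outcome should only be recorded for exiters
    if rec["entered_employment"] and rec["exit_date"] is None:
        problems.append("employment outcome without an exit date")
    return problems

rec = {
    "registration_date": date(2012, 7, 1),
    "exit_date": date(2012, 3, 15),  # before registration -> flagged
    "age_at_registration": 37,
    "entered_employment": True,
}
print(check_record(rec))
```

Routine checks of this sort, applied before data are aggregated for performance reporting, would surface many of the errors discussed here at the point of entry rather than years later.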
Recommendations
Based on our analysis and key findings, we have identified several potential pathways to
increase the effectiveness of the current PMS. The following recommendations could prove
particularly effective if considered in relation to the upcoming implementation of WIOA.
The Department of Labor should:
1. Consider weighting state and local ex-post performance rather than
weighting performance targets. This method would better gauge effectiveness
performance and encourage states to pursue equity objectives.
2. Create and make use of a training and job category match measure that
would reveal important information about the ability of states and LWIBs
to provide relevant training. Such a measure would place greater emphasis on
providing in-demand skills. The Department should also consider developing a
demand-driven skills training match measure.
3. Develop and make use of a standardized efficiency measure. States are
currently developing ROI models of their own, and such a measure could be
based on these. However, a simple cost per job measure would be easy to
implement and effective at measuring efficiency. A standardized measure
developed by DOL would also allow for comparison of states’ efficiency in
service delivery.
4. Collect and report individual-level costs of training programs and intensive
services in WIASRD. This would allow for more precise calculations of cost per
successful intervention, which could form an easy-to-implement standardized
efficiency measure as described above.
5. Ensure that incentives are administered in a timely and consistent way to
establish a clear link between performance measures and incentives. When
agents clearly understand how and why their performance is being rewarded or
sanctioned, incentives can be more effective at inducing desirable and expected
behavioral responses. Timely grants allow the agents to consciously evaluate their
performance and identify more directly the changes needed for improvement.
6. Incorporate efforts to improve data quality into the performance goals and
the incentive structure. The assessment of performance is based on the data
collected. The Department should give greater priority to improving data
infrastructure and training during the transition to WIOA. It should also take
advantage of the legislative transition phase to facilitate the improvement in data
collection and reporting procedures.
7. Actively support and promote any study that makes use of the introduction
of WIOA to assess the impact of WIA job training programs. The
Department should aim to collect annual WIASRD panel data to facilitate future
studies of impact.
1. Introduction
The Public Workforce System (PWS), a network of federal, state, and local agencies headed
by the US Department of Labor (DOL), aspires to place American jobseekers into well-paid
work and supply employers with well-skilled, well-prepared workers. To this end, the PWS
provides individuals with tailored services, from employment search assistance to skills gains,
to ensure their job market success. The provision of such services by the System produces
better-trained and better-skilled workers for employers. The Workforce Investment Act
(WIA) of 1998 has been an instrumental vehicle for job training and employment services
within PWS, but is to be superseded by the Workforce Innovation and Opportunity Act
(WIOA) of 2014.
With program delivery highly decentralized from Department control, DOL has long
employed a performance management system (PMS) in its federally funded employment and
job training programs with several aims in mind: to ensure accountability and quality in the
local delivery of federal programs while allowing for flexibility, to monitor and inform DOL
of state and local workforce training activities, and to create an incentive system through
which DOL can drive high performance by allocating rewards to well-performing states and
sanctions to poor-performing states.
The WIA PMS consists of six main elements (Otley, 1999; National Performance Review,
1997):
1. Performance planning: defining and setting performance objectives, goals, and
metrics.
2. Quality management: quality assurance, quality control, and inspections.
3. Performance measurement: designing a monitoring program and monitoring
frequency.
4. Performance data: assessing and validating collected performance data, evaluating
and utilizing performance information.
5. Performance feedback mechanisms: performance assessment, feedback and feed-
forward loops that enable the organization to learn from its experience and adapt its
current behavior and resulting sanctions and rewards.
6. Corrective actions/performance improvement planning: post report planning.
The implementation of WIOA promises to make substantial adjustments to the current
PMS, particularly with regard to performance measurement. This may have important
implications for program outcomes. Although a common saying about performance
measurement is “what gets measured gets done,” the relationship between performance
measurement and program outcomes is not well understood. As such, this legislative change
to WIOA marks an important opportunity to take stock of the strengths and weaknesses of
the current PMS under WIA, with specific attention to PMs, to inform the design and
implementation of performance management tools under the new act.
1.1 Research Aim
Our aim in this research is three-fold. First, we seek to shed light on the relationship between
three key elements of the WIA performance management system – (1) performance
measurement, (2) performance data, and (3) performance feedback mechanisms – and
program outcomes. To accomplish this, we narrow our focus to a specific WIA workforce
training program, the Title IB Adult Program, and critically assess the PMs DOL currently
employs to track program performance, the performance incentives attached to these
measures, and the data generated in the process in an effort to determine the usefulness of
these controls in eliciting the Department’s desired program outcomes.
This assessment informs our second aim: to identify methods of optimizing the impact of
performance measures, data, and incentives on program outcomes. We draw on our analysis
of these public sector controls under WIA to provide DOL recommendations for how they
might be improved in WIOA to better influence program outcomes.
Our final aim is to raise important questions about the PMS employed by the Department
for future consideration. Through our investigation, we hope to provide a departure point
for subsequent research, and shape an agenda for researchers at DOL looking at the
effectiveness of the system.
1.2 Background on the WIA Programming and the Public Workforce
System
Before any analysis of the system, it is necessary to gain an understanding of WIA, how it is
delivered, and what constitutes the Title IB Adult Program. This section provides a brief
background to that end. The Workforce Investment Act (WIA) of 1998 is a federal act that
provides workforce investment activities through state and local workforce investment
systems to: (1) increase employment, job retention, earnings, and skills attainment, (2)
improve the quality of the workforce, (3) reduce welfare dependency, and (4) enhance the
productivity and competitiveness of the nation. The Act creates sixteen federally funded
workforce development programs that span four federal departments, including the
Department of Labor (Blank et al., 2011).
The workforce development programs run by DOL are delivered through the Public
Workforce System (PWS), which is a network of federal, state and local offices that are
federally funded and locally implemented. Figure 1.1 illustrates the hierarchy and role of each
tier of actors in this network. The Department of Labor heads the PWS and is responsible
for administrative oversight, funding and research, and policy guidance. State Workforce
Investment Boards (SWIBs) sit below DOL in the PWS hierarchy, and are charged with
setting state-level strategic objectives that reflect both local interests and federal goals. Both
SWIBs and Local Workforce Investment Boards (LWIBs) are composed of representatives
from businesses, labor organizations, educational institutions, and community organizations.
LWIBs are accountable to SWIBs and responsible for setting local strategic direction,
establishing funding priorities, and determining what kind of job- and skills-training
American Job Centers (AJCs) should deliver. The LWIBs also determine how many AJCs are
required locally, where these should be located, and how they will operate. Finally, AJCs are
responsible for implementing WIA programming, and provide jobseekers with the resources
and trainings needed to find employment. AJCs are also responsible for the delivery of other
federal, state, and local job training programs apart from WIA.
Figure 1.1– The Public Workforce System
This report focuses on the WIA Title IB Adult program, which is available to all individuals
over the age of 18, constituting one of the largest federally funded employment and training
programs in the United States. Adult Program services are provided through One-Stop
Career Centers (the American Job Centers described above). These services are provided in three tiers of progressing intensity: Core,
Intensive, and Training services. Participants must first utilize Core services before being
deemed eligible for Intensive services, and utilize Core and Intensive services before
receiving Training. Core services, which include job search services, tools, and labor market
information, are available to all jobseekers. Intensive services are available to individuals
requiring assistance beyond Core services (as determined by AJC caseworkers), and include
more comprehensive assessments of individuals’ skills and the development of individualized
employment plans, counseling, and career planning. Finally, Training services are provided
for those requiring the most assistance. Training services include both occupational and basic
skills training. The menu of training programs available to WIA Adult Program participants
is determined at the LWIB level, and is provided by select local Eligible Training Providers
(ETPs), such as local community colleges, which are approved at the local and state levels.
WIA Title IB funds these training programs (or a portion of them) through Individual
Training Accounts (ITAs) that are set up for participants who then select an appropriate
training program from an ETP.
1.3 Report Outline
Our evaluation of performance measures, incentives, and data under the WIA PMS
comprises six parts. First, we define the DOL objectives against which we assess current
performance management. Second, we provide a comprehensive literature review of public
sector PMs and their role in broader public sector PMSs. Third, we develop a theoretical
model to explore how current DOL PMs and incentives influence agency behavior and
program outcomes. Fourth, we explore the relationship between performance indicators and
three primary DOL objectives using data from the DOL Employment and Training
Administration (ETA) website and the WIASRD public use dataset. Fifth, we discuss
performance data issues encountered in our analysis and their implications for the
effectiveness of the PMS. Finally, based on our analysis, we provide a set of
recommendations for future improvements of DOL performance management.
2. Effectiveness, Efficiency, and Equity
In order to assess the relationship between WIA performance measures, incentives, and
program outcomes, we must first clearly establish the outcomes and objectives of interest to
DOL. This section draws from WIA legislation and DOL strategic planning documents to
outline the primary goals of WIA programming.
WIA is clear in the intended goals of the employment and job training programs and services
outlined in the Act, stating in the first paragraph:
“The purpose of this subtitle is to provide workforce investment activities,
through statewide and local workforce investment systems, that increase the
employment, retention, and earnings of participants, and increase occupational
skill attainment by participants, and, as a result, improve the quality of the
workforce, reduce welfare dependency, and enhance the productivity and
competitiveness of the Nation (WIA, 1998, Sec 106).”
Despite the clarity and directness of these stated goals, the approach that states and local
agencies should adopt in achieving them is less clear. Taken at face value, these goals may
suggest an approach that entails focusing training and resources on those most likely to
succeed, and thus most likely to positively influence performance measures. Such an
approach speaks to the prioritization of effectiveness in achieving the desired outcomes, and a
prioritization of efficiency in maximizing performance given a certain programming budget
allocation. Indeed, in outlining its performance management system, WIA describes the
system’s intention “to promote the efficiency and effectiveness of the statewide
workforce investment system in improving employability for jobseekers and competitiveness
for employers (WIA, 1998, Sec 136 (e) – (3)).”
In addition to efficiency and effectiveness goals, WIA also reveals concerns about equity in
the delivery of training services. In detailing how WIA program applicants should be selected
into services, WIA stipulates that, where funds are limited, “priority shall be given to
recipients of public assistance and other low-income individuals for intensive services and
training services (WIA, 1998, Sec 134 (d) – (4)(E)).” It further explains that, while most skills
training activities should link to employer demand, states may create additional services to
serve:
“Special participant populations that face multiple barriers to employment …
[which may include] … I) individuals with substantial language or cultural
barriers; II) offenders; III) homeless individuals; IV) other hard-to-serve
populations as defined by the Governor involved (WIA, 1998, Sec 134 (d) –
(4)(G)(iv)).”
The effectiveness, efficiency, and equity goals outlined in WIA have been reiterated in
subsequent DOL strategy documents. In DOL’s Strategic Plan for Fiscal Years 2014-2018, the
Department details WIA’s contribution to the Department’s mission to “promot[e] and
protect opportunity for all workers and their employers (DOL, 2013, p. iii)” by preparing
workers for better jobs (DOL, 2013). To do this, the Department charges the ETA with the
responsibility to “advance employment opportunities for US workers in 21st century demand
sectors and occupations using proven training models and through increased employer
engagement and partnerships” (DOL, 2013, p. 11) through WIA programming. It also
recommits to equity concerns, specifying that the Department should maintain a “focus on
the hardest to serve populations, assuring that these groups expand their economic
opportunities and do not get left behind (DOL, 2013, p. 15)”.
Given the Department’s consistent advocacy of all three goals of effectiveness, efficiency,
and equity in the delivery of WIA services, we adopt an approach that evaluates whether
PMs, performance data, and incentives work to promote these objectives. We use a
principal-agent framework to assess whether the Department’s three goals conflict with one
another in influencing the behavior of state and local agents, and assess whether the WIA
performance measurement system elicits an overemphasis of any of the three goals at the
expense of the others. While effectiveness-efficiency-equity trade-offs are common in
organizations with finite resources, we find that because performance standards and
performance incentives are linked exclusively to effectiveness goals, state and local agents
have little motivation to devote effort and resources to pursuing equity and efficiency goals.
3. Literature Review
3.1 Influencing Behavior through Performance Measurement and
Incentives
Performance measures (PMs), data, and incentives are three important elements of a broader
management control system, which exists to manage employee behavior, guiding it toward
the “right” actions in pursuit of organizational goals. PMs provide measurable,
observable information about the results of an employee’s actions that allow evaluators to
observe their progress toward performance targets. The existence of performance data
allows managers to link the results of employee efforts to an incentive system of rewards and
sanctions. Incentives linked to results encourage employees to see beyond the actions they
take, and focus, instead, on the consequences of their actions.
Collecting data on performance serves three main purposes, all of which help facilitate
performance improvement. First, it allows for the identification of the best and worst
performers, in theory enabling managers to learn from the practices of the best performers
and improve the performance of the worst. Second, it allows managers to identify the
dimensions in which an organization is struggling; that is, PMs can pinpoint where
organizations need to improve. Finally, the ability to chart performance allows employers
to monitor the efficient use of scarce financial resources (Burgess et al., 2002).
Theoretically, performance measurement and incentive systems can influence results, as long
as three basic conditions are met: (1) both the employees and the organization know what
the desired results are, (2) employees are actually able to influence the results for which they
are being held accountable, and (3) the results being controlled are effectively measurable.
For results controls to actually evoke the “right” behaviors, or the behaviors desired by the
managers, an additional set of conditions must be met. PMs must be (1) congruent, (2)
precise, (3) objective, (4) timely, and (5) understandable (Osborne et al., 1995; Merchant and
Van der Stede, 2011) and incentives should be (6) valuable to the agent, (7) large enough and
visible enough to influence behavior, and (8) administered in a timely manner (Osborne et
al., 1995; Merchant and Van der Stede, 2011).
First, the designer of a management control system must make sure that a results control is
congruent with the ultimate goals of the organization. This requires carefully choosing the
right results to measure and placing appropriate weights on the different measured areas, so
as to avoid a scenario in which employees game the system by focusing effort on areas in
which good performance is easiest to attain, while neglecting areas that are harder to
perform well in but more valuable to the organization. Second, a PM
must be precise; that is, a results measure must be able to be reported as a specific numerical
value, not just as a range of values. Third, a PM must be objective, or free from bias. The
more discretion the evaluated employees have in how their performance is being measured,
the less objective a PM is. Fourth, a PM should be timely to effectively elicit the desired
results; that is, little time should separate actual performance from the measurement of
results and the provision of rewards. And fifth, a PM should be
understandable. In other words, employees must know what they are being held accountable
for, and employees must understand what they can do to influence the measure.
Once PMs are set to fulfill the above criteria, an incentive system of rewards and sanctions
must be set, such that employees are motivated to improve on the dimensions in which they
are measured. In order for incentives to evoke the desired behavioral response among
employees, they must offer something of value in exchange for good performance in the case
of rewards, or provide some kind of meaningful penalty to employees in the case of
sanctions. Incentives must also be large enough such that they outweigh the effort costs
employees may endure pursuing the desired performance outcome. Finally, incentives must
also be administered in a timely manner so that employees are able to sufficiently link their
effort and actions to consequences (Merchant and Van der Stede, 2011).
Though theory speaks to the potential improvements that could be borne of performance
measurement, a rich body of literature addresses its potential negative consequences,
specifically gaming responses to incentive systems. Linking rewards and
sanctions to performance could distort incentives and elicit unintended dysfunctional
responses that do not align with desired outcomes. Because rewards are allocated based on
measured performance, PMs present agents with a direct trade-off between: (1) performing as
well as possible on the measures and (2) engaging in welfare-improving activities that may
divert them from maximizing their performance against PMs.
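This trade-off can be made concrete with a stylized multitasking sketch in the spirit of Holmström and Milgrom (1991); the notation and functional forms below are our own illustrative assumptions, not taken from the WIA literature.

```latex
% An agent divides effort between a measured task, e_1, and an
% unmeasured, welfare-improving task, e_2. Only e_1 enters the
% performance measure:
P = e_1 + \varepsilon, \qquad \mathbb{E}[\varepsilon] = 0 .
% Under a linear incentive payment w = \alpha + \beta P and a convex
% cost of total effort, the agent's expected payoff is
U = \alpha + \beta e_1 - \tfrac{c}{2}\,(e_1 + e_2)^2 .
% Maximizing U yields e_1^{*} = \beta/c and e_2^{*} = 0: because only
% the measured task is rewarded, effort on the unmeasured task is
% crowded out entirely, however valuable it may be to the principal.
```

On this logic, the stronger the reward tied to the measured dimension, the more completely unmeasured activities (such as serving hard-to-place participants whose outcomes would depress the measure) are neglected.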
When given explicit incentives, agents could respond by changing their behavior. Agents
could respond by “gaming” the incentive system, acting strategically to raise their
performance on a measure, and therefore the sum of their potential reward, without
necessarily giving thought to furthering the goals of the organization for which they work
(Dixit, 2002). Gaming responses could take different forms, from cream skimming to
strategic timing. Cream skimming is the practice of providing a service to only those
customers that are more profitable to serve, because they are high-value, low-cost, or both.
Strategic timing is the practice of manipulating how and when performance is reported to
maximize awards, for example, by transferring performance in excess of the target level to
the next reporting quarter. This is considered to be a gaming response if it leads to a costly
misallocation of resources (Courty and Marschke, 2004).
It is often said of performance management systems that, “what gets measured gets done.” It
follows that, much as management control systems can be powerful tools to direct agent
behavior toward desirable goals, they can just as easily lead agents to act in perverse, welfare-
decreasing ways.
3.2 Performance Measurement in the Public Sector
Performance measurement and incentives have long had a presence in the private sector,
where financial stakeholders have strong, direct interests in monitoring the performance of
their corporate agents. These management tools came into use in the public sector with a
wave of other private sector practices in the late 1980s, collectively known as “New Public
Management,” the whole of which was intended to “reinvent government” and make the
public sector more competitive, results-oriented, and mission-driven (Osborne and Gaebler,
1992). The demand for performance measurement continued into the 1990s, a decade in
which the ruling theme, at all levels of government, was a call for the documentation of
performance and explicit outcomes of government action (Radin, 2000).
Despite their roots in the private sector, performance measurement and incentive systems
operate much differently in the public sector. Public sector organizations have features
distinct from those of private sector organizations; government bureaucrats do not cater to
shareholders, but rather to politicians, financiers, professional organizations, and users (Dixit,
2002). Additionally, civil servants in public sector organizations may have different sources
of motivation than what is normally assumed of agents in the incentive literature, and may
therefore not respond to potential rewards and sanctions in the same way private sector
employees might (Le Grand, 1997).
Principal-agent theory assumes rational, self-interested agents acting in ways that maximize
their payoffs (Eisenhardt, 1989). In this framework, the principal uses PMs to align the
agent’s interests with his or her own interests. The standard framework is rendered more
complex in the public sector setting, where agents are likely to work in situations with
multiple principals and multiple tasks. Public agents likely have different tasks assigned by
different principals who are interested in pursuing different objectives (Propper and Wilson,
2003). Different principals having different objectives may lead to defined tasks that are
difficult to pursue simultaneously, if not completely contradictory. Having multiple principals
with varying degrees of power and areas of emphasis creates a complicated system for the
agents, one in which the agents are directed from a number of sources, making them
accountable to different authorities with varying emphases on outcomes. This diffusion of
direction complicates management. Even if tasks were aligned, the mere multiplicity of
tasks and principals makes high-powered incentives, like direct financial rewards for
performance, less useful in the public sector than in the private sector
(Dixit, 2002). As such, PMSs may not be able to elicit the same level of incentivizing power
in the public as in the private sector.
It is also the case that public sector employees do not only face different institutional
features, but also have different sources of motivation from private sector employees,
following the intuition that public employees comprise a group of self-selected and mission-
driven individuals. Ashraf et al. (2014) find evidence of the relative strength of non-
monetary rewards in motivating agents hired to perform pro-social tasks: tasks that create
value not just for the principal, but also for society at large. Hairdressers tasked with selling
condoms to their patrons exerted more effort and generated higher sales when awarded stars
to put on a publicly visible thermometer chart displaying progress than when offered
financial margins, small or large (10% and 90% commission on the retail price, respectively)
on their sales. Their findings suggest that PMSs designed for the public sector may want to
envision different types of rewards and sanctions for their incentive systems, taking into
account the potential differences in motivation and values.
3.2.1 The Effectiveness of Performance Measures in the Public Sector
Performance measurement and incentives have been used in a variety of public service areas.
To help assess the performance of these tools in the public sector, we refer to empirical
studies of past U.S. job training programs. Heckman et al. (2002) find that PMs may not
sufficiently link to the long-term goals of the Department of Labor. That is, job-training
centers were not adding value in their pursuit of performing well on PMs in the short-term.
Courty and Marschke (1997; 2004) also find evidence of gaming responses to performance
targets, and of an adverse impact on long-term organizational goals. Barnow (2000) also
questions the effectiveness of using PMs to achieve organizational goals in the public sector.
He notes that sanctions for failure to meet JTPA performance targets were not always
imposed, with governors instead opting to modify the PMs in response to poor
performance. He concludes that the rankings on JTPA measures do not correlate with
rankings on program impact and organizational goal achievement. There is empirical
evidence supporting the idea that stronger performance incentives may be more encouraging
of gaming responses. Heinrich (1999) explored one particular job-training center in Illinois
that used performance-based contracts to choose its service providers. These providers
responded to the measures by cream skimming and providing those services that were less
intensive, and therefore, lower-cost. Heckman et al. (1997), after studying what is argued
to be a more representative job-training center in Corpus Christi, find evidence to the
contrary – that case workers were more interested in equity, that is, that they were more
interested in serving those that needed help the most. And although the evidence on cream
skimming is neither consistent nor strong, qualitative analyses of the system, including
interviews of local service providers, suggest its existence (Barnow and Smith, 2004).
3.3 Performance Measurement at the U.S. Department of Labor
In the context of the PMS used to evaluate job training programs provided by the U.S.
Department of Labor, there are two main reasons for performance measurement: (1) to
monitor and (2) to improve performance (Barnow, 2000). The Department established a
pioneer performance measurement system under the Job Training Partnership Act (JTPA) in
1982. The Act, which established federal assistance programs providing job training and job
placement assistance to low income and low skilled workers, sought to bring accountability
to its highly decentralized programs through the use of a performance measurement system.
The performance measurement system devised in the JTPA was unique in that it: (1) directed
performance measures at outcomes (such as the number of trainees placed in jobs, their
mean income, etc.) rather than at inputs and outputs (such as number of people trained)
(Heinrich, 2004), (2) linked PMs across federal, state and local levels of government, and
(3) provided performance-based financial incentives for state governments and local
program administrators (Heinrich, 2002).
Literature assessing the JTPA’s performance measurement system has found that conflicting
goals within JTPA itself may have limited the effectiveness of the system. Although one of
JTPA’s explicit goals was to focus on those most in need of services, another goal of
government is to deliver services effectively, maximizing total net benefits from the program.
Because the PMs created under the JTPA, such as mean earnings and job placement rate,
captured performance according to the latter, these measures may have incentivized local
program administrators to engage in cream skimming (Heckman et al., 2002; Heinrich, 2002;
Courty and Marschke, 2011).
JTPA’s successor, the Workforce Investment Act (WIA) of 1998, sought to address this
problem by expanding the measures used to capture performance. The Act added several
credential rating measures to gauge trainee skills, education and training gains that result
from WIA programs. New measures of “customer satisfaction” (customers being both
trainees and employers) were also added. While these additional measures provided a more
nuanced view of performance, they also added to local data collection burden. The impact of
this additional burden was compounded by new requirements for state performance data
management systems imposed by WIA. Studies show that this additional burden has been
problematic, requiring some states to develop or redesign their data management software,
leading to substantial delays and data lags (GAO, 2002). Inadequate direction from the
Department of Labor on how to collect and record performance data under WIA has also
led to inconsistencies within the data, limiting the extent to which data from different
program administrators can be compared (GAO, 2013).
Under WIA, the targets are arrived at through negotiation between the Department and the
states. The negotiation process is supposed to reflect economic, geographic, and
demographic conditions of the state. This is distinct from the JTPA process in which the
targets were arrived at using a regression-adjusted procedure to incorporate economic and
demographic characteristics. Without a uniform negotiating procedure, performance targets
depend on how the states negotiate with DOL. Heinrich (2007) finds that negotiated targets
did not, in fact, reflect these factors, in particular education and race. She also finds a
negative relation between performance bonus size and performance. From 2009, the federal
targets were adjusted for demographic and economic conditions (DOLETA, 2009; see also
Bartik et al., 2009). The greater concern is that financial incentives remain tied to these
inconsistently set performance targets.
Beyond data issues, some have questioned whether the short-term PMs employed by
WIA, and PMs more widely, are adequately linked to long-term program impacts (Heckman et
al., 2011). Indeed, studies suggest that the relationship between WIA PMs and long-term
program outcomes is weak at best and negative at worst (Ibid). Without short-
term measures that are sufficiently associated with long-term outcomes, DOL runs the risk
of incentivizing the delivery of programs and services that “hit the target but miss the point”
of the overall program (Ibid).
4. Theoretical Model
Experiences and findings in the literature provide a useful frame for our evaluation of WIA
performance measurement and incentives. Much of the literature speaks of performance
measurement systems as an accountability mechanism in a principal-agent relationship,
making useful propositions and predictions about agent behavior and related subsequent
outcomes. In this section, we adopt a framework rooted in principal-agent theory to
illuminate the dynamics of the relationships between the different actors that comprise the
Public Workforce System. In doing this, we systematically evaluate WIA’s PMs, incentives,
and performance data and assess their usefulness in bringing about DOL’s desired outcomes.
4.1 Principal-Agent Framework
In a principal-agent relationship, the principal hires an agent to act on her behalf, or make
decisions that impact the payoffs of the principal. A principal-agent problem exists when: (1)
an agent has incentives to act in ways that do not align with the principal’s interests, and (2)
there is an asymmetry of information whereby the principal does not know the true interests
of the agent and cannot directly monitor her agent’s behavior to ensure that the agent is
acting in the principal’s best interests. Where agent interests differ from those of the
principal, moral hazard may occur and the agent may pursue his or her own interests at the
principal’s expense. PMs and their resulting data can address this risk of moral hazard by
measuring performance outcomes – thereby reducing the asymmetry of information. An
incentive system that provides rewards or sanctions based on measured performance can
align agent incentives with the principal’s desired outcomes. In this chapter, we identify how
principal-agent problems may arise in the PWS and assess how successfully performance
measurement and incentive controls under WIA can be expected to address these problems.
We begin by identifying the principals and agents in the PWS and their preferences, then
turn to assessing whether WIA performance measurement adequately reduces asymmetries of
information between DOL and its various agencies and whether the incentive system
successfully aligns agents’ incentives to the Department’s interests.
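The setup described above can be written compactly. The linear-contract sketch below follows the standard agency model surveyed in Dixit (2002); the functional forms and parameters are illustrative assumptions of our own, not drawn from DOL documents.

```latex
% The agent's hidden action a produces a measured outcome
y = a + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{2}),
% which the principal observes in place of a itself. Under a linear
% contract w = \alpha + \beta y, an agent with effort cost \tfrac{c}{2}a^{2}
% and risk-aversion coefficient r chooses a^{*} = \beta/c, and the
% surplus-maximizing incentive intensity is
\beta^{*} = \frac{1}{1 + r\,c\,\sigma^{2}} .
% Noisier performance measures (larger \sigma^{2}) call for weaker
% incentives: tying large rewards to imprecise data imposes risk on the
% agent without proportionate gains in effort.
```

The sketch suggests one reason the reliability of performance data matters for the broader system: the noisier the measure, the weaker the incentives it can sensibly support.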
4.2 Principals and Agents in PWS and their Interests
Our analysis recognizes principals to be those who supply funding and design PMs for their
agents, with the ultimate aim of having the agents act toward the achievement of the
principal’s objectives. In this particular context, we have multiple principals (who occupy
different roles in a hierarchy of principals) and one agent. We classify principals as either
primary or secondary based on their specific functions in the Public Workforce System. The
Department of Labor is our principal of interest, as it supplies the financial resources to the
WIA Title IB Adult Program, sets the overall program objectives, and designs the PMs to
deliver the program’s intended benefits. Secondary principals also play a role in our analysis.
Secondary principals may, in practice, have very similar roles to the primary principal, but are
not providing funding in this model. In this group are state and local workforce investment
boards (LWIBs) that design targets and programs tailored to local needs, interacting directly
with the agents that are tasked to implement the programs. Secondary principals are, however,
distinct in that they must respond and report to the primary principal, who allots them their
budget appropriations. In setting state targets, the Department negotiates with the state,
which, in turn, negotiates with LWIBs in setting local targets. LWIBs then direct the program
and service delivery functions of the American Job Centers in their jurisdiction. In this sense,
states and LWIBs can act both as principals and as agents. Importantly, both primary and
secondary principals rely on their agents, the local implementing authorities, to make
decisions on their behalf and carry out the tasks necessary to achieve the Department’s
ultimate goals.
While the different levels of actors working to deliver the WIA Title IB Adult Program
may have similar interests, Figure 4.1 illustrates how the multi-tier system creates several
points at which interests may diverge. It may be reasonable to assume that principals and
agents at all levels are in some way interested in the effectiveness, efficiency, and equity
goals held by DOL; however, actors at lower levels must balance these interests with other,
more local concerns. Where local interests conflict with the primary principal’s goals, problems
may arise.
Figure 4.1– The Principals and Agents of the WIA Title IB Adult Program and their
Respective Interests
4.3 Reducing Asymmetries of Information
In the context of the Public Workforce System, asymmetries of information emerge because
the Department of Labor, the primary principal, cannot directly observe the
actions and effort of the agents. In the introduction to this report, we identified the key
objectives that the Department of Labor seeks to achieve through WIA: effectiveness,
efficiency, and equity (outlined in Table 4.1). If performance measurement under WIA
functions well, it should substantially reduce the information gap between DOL and its
agents. Analyzing two main WIA reporting requirements for agents, the WIA Standardized
Record Data (WIASRD) and annual WIA state reports, we find that the WIA measurement
system does collect information that can inform DOL of agents’ contributions to its
objectives, potentially addressing this aspect of the principal agent problem.
Table 4.1– DOL Objectives
4.3.1 WIASRD Data
By requiring agents to submit WIASRD data, DOL solicits many useful pieces of
information from agents regarding effectiveness and equity objectives. States are responsible
for submitting quarterly data on three groups of information: (1) individuals, (2) activities
and services, and (3) program outcomes. The first section reports data on the individuals
participating in these programs, including such things as gender, ethnicity, disability status,
and other demographic characteristics. The second reports information relating to the WIA
programs and includes information on the type and nature of training and services received.
The third section includes information on the achievements of participants after exit from
WIA, reporting their employment and salaries for several quarters after exiting a WIA
program. Together this data allows the Department to observe demographic composition of
individuals trained, the kinds of trainings participants receive, and their employment post-
training.
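To make the three WIASRD reporting groups concrete, the structure of a single record can be sketched as below. The field names are simplified stand-ins for illustration, not actual WIASRD element numbers:

```python
from dataclasses import dataclass

@dataclass
class WIASRDRecord:
    # (1) Individual characteristics
    gender: str
    ethnicity: str
    has_disability: bool
    # (2) Activities and services received
    received_training: bool
    training_type: str
    # (3) Outcomes after program exit
    employed_q1_after_exit: bool
    avg_earnings_post_exit: float

# A hypothetical participant record:
record = WIASRDRecord(
    gender="F",
    ethnicity="Hispanic/Latino",
    has_disability=False,
    received_training=True,
    training_type="Occupational Skills Training",
    employed_q1_after_exit=True,
    avg_earnings_post_exit=14500.0,
)
```

Aggregating such records across states is what allows DOL to observe demographic composition, services received, and post-exit outcomes.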
4.3.2 Annual Report Requirements
The WIA Annual Report narrative is another means of closing potential information gaps on
all three of DOL's objectives. States are required to submit reports each year, and state WIA
funding is contingent on their submission. Annual Report narratives require, among other
things, performance data, information on the cost of the workforce investment activities
relative to their effect on the performance of participants, and a report of state activities
directed toward hard-to-serve populations. DOL also requires that states report on local
economic conditions that might affect performance, and a listing of the approved state
exemptions from federal WIA requirements with information on how the exemptions have
allowed for improved performance outcomes (DOLETA, 2012b).
Thus, WIASRD and annual state report requirements appear to solicit information needed to
provide DOL insight into agent actions along its objectives, potentially reducing the
information asymmetry problem. However, the usefulness of the information DOL collects
is largely dependent on how the data is used and the reliability of that data and information.
We return to these issues in greater detail in Sections 5 and 6 of this report.
4.4 Aligning Agent Incentives
Having established that the current WIA reporting requirements may sufficiently reduce the
asymmetries of information regarding agent contributions to DOL objectives, we analyze the
extent to which the performance feedback mechanism, that is, the sanctions and rewards
provided by DOL, is successful in aligning agent interests with those of the principal. We
find that the current measurement and incentive structure creates a commitment problem for
agents in the PWS in pursuing all three broad categories of DOL objectives. Incentive
structures in place may therefore not be effective in eliciting the desired behavior among
agents.
4.4.1 Commitment Problems
Although DOL’s key interests (summarized in Table 4.1) span a range of objectives, the PMs
and incentivizing structure focus exclusively on effectiveness goals. DOL collects large
quantities of data and information, as discussed in the previous section, but agent
performance is measured exclusively along three key indicators: the rate at which program
participants enter employment post-program, the rate of employment retention and
participants’ average post-program earnings, as seen in Table 4.2. These measures are
enforced through a system of financial incentives. States that exceed their annual targets
along these measures are eligible to apply for additional funds in the form of grants, while
underperforming states are subject to sanctions. These sanctions and rewards function as a
commitment device to ensure that agents stay committed to improving performance on
DOL’s performance measures.
Table 4.2– WIA Adult Program Common Measures
Source: DOL, 2006
The current performance measurement system under WIA seems to lack consideration of
how program goals interact with each other in practice. Agents seeking to achieve equity
goals by focusing services on disadvantaged groups may fail to achieve efficiency objectives,
as the services and trainings required to assist those with substantial barriers to employment
may require more resources. These agents may also underperform on effectiveness objectives
compared to agents who focus training on those with fewer barriers to
employment, who are more likely to succeed. Other agents focusing their efforts solely on
achieving effectiveness measures may do so by cream skimming, at the cost of equity, or
reducing the number of total participants served so that each participant receives a greater
proportion of the agent’s resources, coming at the cost of efficiency. The fact that the WIA
incentive structure focuses exclusively on effectiveness measures is likely to substantially
influence the objectives agents pursue. Although it may be the case that agents within the
PWS share the principal’s efficiency and equity goals, agents at the state, LWIB, and service
delivery level may have little motivation to devote effort and resources toward achieving
them, because these goals remain unmeasured and without incentives to encourage their
pursuit. This is especially true where efficiency or equity objectives may come at the expense
of effectiveness. Thus, as seen in Figure 4.2, there is a move away from pursuing efficiency
and equity objectives towards pursuing incentivized effectiveness objectives only.
Figure 4.2– Focusing Agent Efforts on Effectiveness
This situation creates a commitment problem for agents within the PWS. A commitment
problem is defined as a situation in which an actor would like to commit to a certain course of
action, but is unable to do so because he can maximize his payoff by following a different
course of action. Because PMs and the WIA incentive system are only linked to effectiveness
goals, agents within the PWS cannot credibly commit to the pursuit of efficiency or equity
goals. As long as efficiency and equity are not incorporated into the performance feedback
mechanism, these objectives lack a commitment device. Incorporating PMs that address
efficiency and equity as a formal part of the incentive system may address this problem.
However, this will only be effective if the incentive system provides agents with credible
threats and/or promises to ensure that agents act in alignment with the principal’s interests.
4.4.2 Evaluating the WIA Incentive Scheme
Incentive theory predicts that performance measures, when linked to rewards and sanctions,
align the interests of agents with those of their principals. Agents will do what the PMs tell
them to do, as long as the incentive system is designed well. As explained in our literature
review section, and according to Merchant and Van der Stede (2011), for an incentive system
to be effective, we must ensure that (1) agents value what is provided through the incentive
system, (2) incentives are considered large and visible enough to the agents, (3) agents know
what actions are required of them, and these actions are achievable, and that (4) incentives
are awarded in a timely manner. WIA incentives take the form of rewards or sanctions, based
on whether the agent fails to meet, meets, or exceeds the performance target (as explained in
Appendix B). While Section 4.4.1 has addressed the issue of whether required actions are
known, we explore whether the remaining criteria are in place to provide effective incentives to the
agents.
Firstly, the reward grants under the current incentive scheme must be reinvested into the
program (DOLETA, 2014c). It is therefore difficult to conclude that agents value this
incentive highly, as the reward only allows the agent to improve on already established
measures (ibid.), not to pursue any autonomous interest. If the agent is already struggling to
perform well enough to exceed the target, and thus become eligible for a reward, she is
unlikely to be motivated by the prospect of doing "more of the same". Sanctions, on the
other hand, take the form of a reduction in the budget when the agent fails to reach the
target (DOLETA, 2007). Such a reduction in resources is clearly costly to the agent, as it may
lead to local reorganization and potentially require downsizing of staff. Avoiding sanctions is
therefore certainly of value to agents, and should motivate performance sufficient to avoid
them. On an individual level, as rewards do not translate directly into individual agents'
financial benefit, we do not expect employees to respond actively to the current rewards.
Although individual caseworkers within the PWS cannot internalize the rewards of the
incentive system, we do expect them to be highly motivated to perform well enough not to
lose their jobs. Thus, for both Job Centers and individual staff, we expect that the incentive
scheme is effective in promoting performance up to the minimum accepted standard on
current PMs, but not above the performance target needed to trigger rewards.
Secondly, as currently implemented, WIA award grants range from $750,000 to $3,000,000,
depending on the amount of available appropriated funds (DOLETA, 2007), and may fail
to provide sufficiently large rewards. The rewards are often relatively similar irrespective of
states' size, with eligible states' awards ranging only from $665,342 to $768,000 in 2011
(DOLETA, 2013b). Given the large variation between states in program size and
population, the size of rewards may not be equally attractive to all states. Larger or
more resource-rich states with larger budgets are less incentivized by rewards from the
relatively non-proportional incentive scheme. Moreover, rewards under the WIA incentive
scheme are small compared to external funding sources, where qualifying for the external
funding scheme is not explicitly linked to achieving the WIA targets. As an example, the
Ready to Work Partnership offers grants totaling $169,771,960 (DOLETA, 2014a). Other
sources of funding also exist, such as the Workforce Innovation Fund and the Pay for
Success Grants (DOLETA, 2014d), decreasing the relative size of the WIA rewards fund, which
totals around $10,000,000 annually (DOLETA, 2012a; 2013b). Thus, the existence of such
external resources could potentially attenuate the motivational power of WIA rewards. This
implies that agents may not adequately be incentivized through rewards to exceed the WIA
performance targets.
Sanctions are imposed when states fail to meet their targets, and are calculated as a 1-4%
budget decrease in the following program year (DOLETA, 2007). As previously stated, such a
reduction in resources is likely to have negative consequences that agents highly desire to
avoid. Once again, we expect that sanctions are large enough, and sufficiently highly valued,
to incentivize agents to meet their targets. However, the credibility of the threat of financial
sanctions is called into question by the low frequency with which such sanctions are actually
imposed on underperforming states. In recent years, where states have been eligible for
sanctions, the Department has instead opted for a guided path to improvement (L. Murren
2015, Pers. comm., 5 February). While the decision not to impose sanctions on
underperforming states may indeed have been appropriate in context, the fact remains that
the inconsistent application of incentives may lessen their effect as commitment devices.
DOL may not strictly enforce performance on targets due to considerable uncertainty in
setting performance targets, or due to not wanting states to fall into a kind of poverty trap
(low performance begetting even more low performance due to sanctions reducing already-
limited budgets), and so it follows that sanctions may become a less credible threat to the
agents.
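To give a rough sense of how such a sanction bites, the sketch below applies the 1-4% range to a hypothetical state allotment. The linear mapping from the size of the performance shortfall to the size of the cut is our own assumption for illustration; the sources cited here specify only the range:

```python
def sanction_amount(annual_allotment: float, shortfall: float) -> float:
    """Budget cut for missing the target, within the 1-4% range
    cited above (DOLETA, 2007).

    `shortfall` is the fraction by which the state missed its target.
    The linear shortfall-to-cut mapping is an illustrative assumption,
    not DOL's actual formula.
    """
    # Scale the cut within the 1-4% band, capping at 4%.
    cut_rate = min(0.01 + 0.03 * min(shortfall / 0.20, 1.0), 0.04)
    return annual_allotment * cut_rate

# A state with a $50M allotment missing its target by 10% would see
# roughly a 2.5% cut under this assumed mapping:
loss = sanction_amount(50_000_000, 0.10)
```

Even a mid-range cut on a large allotment runs to seven figures, which is why agents plausibly value avoiding sanctions far more than earning rewards.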
Lastly, the timeliness of incentives is important to establish a clear link between the agent’s
efforts and the reward or sanction that follows. As discussed, there are severe data lags in
agents’ reporting to the principal through the performance measurement system. Such data
lags contribute to severe lags in the calculation of rewards and sanctions as well. The
deadline for rewards based on performance for PY11 was June 30, 2013 (DOLETA, 2013b).
Thus, rewards and sanctions for any program year are not implemented until two years later.
This creates a disconnect between performance and incentives. If an agent fails to meet the
target in year one, and then improves significantly and meets the target in the next year, she
may still receive the sanction in year three based on her performance in year one. This will
make it challenging for the agent to continue her improvement and exceed the target in year
four. This lag between performance and financial consequences raises questions about the
ability of WIA incentives to achieve the intended behavioral responses.
Key findings:
1. Current performance management tools may address the asymmetry of information
between DOL and its agents through the solicitation of extensive record data and
state reports, so long as that information is timely and accurate.
2. The performance measurement and incentive system’s exclusive focus on
effectiveness objectives may preclude states from pursuing other DOL objectives.
Because targets and incentives are only attached to effectiveness measures, agents are
not able to credibly commit to pursuing all three objectives. This is particularly true
where resources are limited and pursuing all three objectives would come at the
expense of performance on effectiveness.
3. The incentive structure, as is, may not provide agents with sufficient motivation to
align their actions to DOL objectives. Rewards lack proportionality and timeliness,
and sanctions are not consistently enforced, reducing their credibility as threats.
5. Quantitative Analysis of Public Use
Performance Data
The conclusions from our theoretical analysis raised questions about whether the
performance measures, data, and incentives in the WIA system could be expected to elicit
agent performance in line with the Department’s desired outcomes. We posited that, with
PMs and incentives as they are, agents are unlikely to independently pursue any objective
aside from effectiveness, which is uniquely accounted for by the common measures under
WIA. We also found reason to question whether the incentive system can reliably motivate
agents to pursue even those effectiveness goals that are explicitly measured. In this section,
we empirically investigate how the PMs under WIA measure and affect agent performance
on each of the three Department objectives of effectiveness, efficiency, and equity.
5.1 Analytical Approach and Choice of States
To evaluate how well WIA PMs and data inform the Department on state performance
toward DOL objectives, and how well they work to drive state performance toward those
goals, we use two main data sources: public use Quarter 3, Program Year 2013 Workforce
Investment Act Standardized Record Data (WIASRD), containing individual-level WIA
participant data, and a panel of aggregated state-level performance data for Program Years
2008-2012, which we construct using national summaries of annual performance data, state-
negotiated levels of performance, and annual state funding allocations for the WIA Adult
Program. Using our panel of aggregated state data, we are able to assess trends over time,
while the individual-level WIASRD data allows us to investigate the drivers of performance
in a given year. To this end, we narrow our analytical focus to the Program Year (PY) 2012
exiters only, as these records are the most complete in our dataset, yielding 1.6 million
observations (for a full description of the data consulted in this analysis, please see Appendix
C).
The scope of our empirical investigation is narrowed to a sample of ten states that broadly
represent four geographical regions in the U.S.: Oregon, Washington, and Idaho (West);
Minnesota and Indiana (Midwest); Texas and Alabama (South); New York, Pennsylvania,
and Connecticut (Northeast). The states were selected with the aim of representing the
geographic, demographic, economic, and political diversity of the United States while
allowing for comparisons within and across regions. Consistent with our previous sections,
we limit the scope of our analysis to the Adult Program. Doing so presents two main
advantages. First, the Adult Program is WIA’s largest program, both in terms of funding and
participants served. This provides us with a large sample size to work with. Second, the Adult
Program, designed specifically to prepare low-skilled jobseekers for employment, serves a
range of individuals broader and more diverse than that of any other WIA program. A study
of the Adult Program, therefore, allows us to evaluate how well WIA PMs inform the
Department of performance in a context where the stakes are high and the conditions are
challenging.1
5.2 Effectiveness
To evaluate state performance on the effectiveness objective, we use the data that states
submitted to the Department of Labor to report on its indicators. The Adult Program has
three common measures: (1) entered employment, (2) employment retention, and (3) average
earnings. Performance on any one of these measures is defined as a success rate: the number
of successful interventions divided by the number of total interventions (DOL, 2006).2 In
this section, we seek to investigate how well these PMs inform the Department on state
performance toward effectiveness objectives, and how well they work to drive state
performance toward such goals.
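The success-rate construction can be sketched in a few lines. The field names and the eligibility rule below (not employed at registration) are simplified assumptions for illustration, not the exact DOL (2006) specification:

```python
def entered_employment_rate(exiters):
    """Entered employment as a success rate: successful interventions
    divided by total interventions.

    Each exiter is a dict; the keys and the eligibility rule are
    simplified stand-ins for the actual measure definition.
    """
    eligible = [e for e in exiters if not e["employed_at_registration"]]
    if not eligible:
        return float("nan")
    successes = sum(e["employed_q1_after_exit"] for e in eligible)
    return successes / len(eligible)

# A hypothetical three-exiter sample; the third is excluded because
# they were already employed at registration:
sample = [
    {"employed_at_registration": False, "employed_q1_after_exit": True},
    {"employed_at_registration": False, "employed_q1_after_exit": False},
    {"employed_at_registration": True,  "employed_q1_after_exit": True},
]
aeer = entered_employment_rate(sample)  # 1 success of 2 eligible -> 0.5
```

The retention and average-earnings measures follow the same ratio logic over different numerators and denominators.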
In our theoretical section, we emphasized that the usefulness of PMs depends on whether
they measure the right things. Therefore, an aim for this section is to understand the degree
to which WIASRD data might meaningfully inform the Department of how successful states
are in promoting the three DOL objectives. We first test, broadly, whether the performance
targets set forth by the Department actually promote better performance. To this end, we
take data from the Adult Program in PY12 (Figure 5.1) to explore the relationship between
the targets and the performance on entered employment. We observe a strong correlation
between the set Adult Entered Employment Rate (AEER) targets and actual performance
for the states. The states selected for our sample, shown in red, closely fit this national trend
line. This correlative relationship could suggest that states do well when they aim to hit the
targets set by DOL. However, we cannot conclude that the targets are causal drivers of
performance. It is possible that the opposite could be true, that it is actually performance
that drives the target levels. Though the second scenario may seem implausible at first, it is
certainly a possibility, because the Department's target-setting practices
incorporate past performance. As the final target level reflects observed performance from
the Program Year two years prior, adjusted for changes in the characteristics of the
individuals served, it could plausibly be the case that targets are simply good predictors of
performance.
1 As not all states completed their reporting prior to the publication of the PY13, Q3 WIASRD data, some states are
absent from the dataset, most notably California and New Jersey, both of which were late to report their data.
2 While these measures can speak to WIA program outcomes, on their own they cannot inform DOL of program
impact. In order to conduct an analysis of program impact, we would need to compare program outcomes for WIA
participants with outcomes of a control group who did not receive WIA trainings and services. Because WIASRD
contains outcome data only for program participants, such impact analyses are not possible. Researchers continue to
grapple with reliable methods of assessing WIA program impact (for a comprehensive review, see Orr et al., 2011).
Quasi-experimental methods using supplementary administrative datasets often struggle to establish a valid
counterfactual, while experimental approaches pose ethical, cost, and feasibility concerns. This body of research will
continue to develop insight on the impact of WIA.
Figure 5.1 – Relationship Between Targets and Performance, PY12
Source: DOL PY12 WIA State Annual Reports and Summaries
Figures 5.2 to 5.4 show the development of targets, performance, and distance from target
for our ten selected states and the national average over the PY08 to PY12 period. The
three charts allow us to evaluate the relationship between the set targets and observed
performance, both in absolute terms, and in terms of reaching the set target. In a well-
functioning performance management system, performance targets should drive
performance, so long as proper incentives are in place. However, when looking at target and
performance trends, it is difficult to identify targets as drivers of performance over time.
Looking at the national average of how close actual performance is to performance targets
over time in Figure 5.2, we observe that after an increase in distance from PY08 to PY09,
likely due to the financial crisis and subsequent recession, the gap between the AEER and
the targets has slowly closed over time, with the national average performance in PY12
actually exceeding the average target. Initially, this seems like a positive indication of state
performance improvements, but the interpretation of this trend is complicated by the fact
that we observe a steady decline in average performance targets (see Figure 5.3).
Interestingly, while targets have steadily decreased, Figure 5.4 reveals that state performance
has actually increased since 2009.
A closer look at the states in our sample reveals a similar puzzle. States in Figure 5.2 appear,
in general, to be trending toward decreasing their target shortfalls, with some regularly
exceeding their targets. While this may be driven in part by the steady or slightly upward
trend in actual performance visible in Figure 5.4, it is unclear whether states' close proximity
to their targets in PY12 is driven more by improved performance or by the consistent
decreases in performance targets visible in Figure 5.3.
Figure 5.2 – AEER Distance from Target, PY08-PY12
Source: DOL WIA State Annual Reports and Summaries
Figure 5.3 – AEER Negotiated Targets, PY08-PY12
Source: DOL WIA State Annual Reports and Summaries
Figure 5.4 – AEER Actual Performance, PY08-PY12
Source: DOL WIA State Annual Reports and Summaries
5.2.1 Improving Effectiveness Measures
While the WIA common measures certainly speak to several of DOL’s effectiveness
objectives, as discussed above, they paint a somewhat confusing picture of state
performance. It is difficult to compare performance across states, given that targets vary
substantially across states, based on such things as past performance and economic and
demographic characteristics. This issue can be addressed, in part, by comparing the gaps
between states’ actual and targeted performance, but it is complicated by large fluctuations in
targets over time, making it difficult to determine how much of the gap is explained by
movement in performance or movement in the target.
Fluctuating performance targets are not necessarily problematic in and of themselves. In fact,
they are to be expected, given that state performance targets are calculated via a regression-
based adjustment system that accounts for changes in the state’s economic climate and WIA
participant demographic characteristics in the previous year and then negotiated with state
officials. Given DOL’s interest in gaining a clear understanding of performance within states,
while still accommodating economic and demographic challenges states may face, it may be
more sensible for DOL to develop a system of weighting performance rather than weighting
targets. This approach has two important advantages. First, it would allow the Department to
create measures of performance that are comparable across states, eliminating the ambiguity
described in the section above. Second, it would allow the Department to account for
relevant state characteristics in the year measured, rather than those in previous years, as is
the case with the current target setting methodology.
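One minimal version of weighting performance rather than targets would regress observed performance on contemporaneous state conditions and treat the residual as adjusted performance. The single covariate (unemployment rate) and the linear form below are illustrative assumptions, not DOL's actual adjustment model:

```python
def adjusted_performance(states):
    """Sketch of weighting performance instead of targets: regress each
    state's observed AEER on a same-year covariate and report the
    residual, i.e. how much better or worse the state did than its
    conditions predict. Positive residuals beat expectations.
    """
    xs = [s["unemployment_rate"] for s in states]
    ys = [s["aeer"] for s in states]
    n = len(states)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Ordinary least squares slope and intercept.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return {
        s["state"]: s["aeer"] - (intercept + slope * s["unemployment_rate"])
        for s in states
    }
```

Because the adjustment uses same-year conditions and is applied uniformly, the resulting scores are directly comparable across states, unlike state-specific negotiated targets.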
5.2.2 Additional Measures of Effectiveness
In addition to weighting performance on common measures rather than performance targets,
DOL may gain better insight into other effectiveness objectives by creating measures with
which to gauge the relevance of WIA trainings and services provided within states. In
attempting to learn more about whether the training provided is related to the category of
employment received, we created a job match variable in the WIASRD dataset, where the
occupational category of training matched the occupational category of job held in the first
quarter after exit.3
However, we find only 23,987 matches out of the almost 1.6 million
individuals in the dataset. In other words, we only have this data for about 3% of registered
individuals. A 2014 GAO report discusses this issue, and provides recommendations on
establishing a performance measure for training-related employment. Such a job match
measure could be a good way to incentivize providing the right skills, rather than just
providing any skills towards WIA participants.
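The construction of such a job match variable can be sketched as follows. The dictionary keys stand in for WIASRD calculated fields 3020 and 3021, the occupational codes are invented examples, and the missing-data handling is our own assumption:

```python
def job_match_rate(exiters):
    """Share of exiters whose post-exit job falls in the same
    occupational category as their training.

    Records with either value missing are dropped from the
    denominator, which is one reason coverage in the public data
    can be so thin.
    """
    usable = [e for e in exiters
              if e.get("training_occ") and e.get("job_occ")]
    if not usable:
        return float("nan")
    matches = sum(e["training_occ"] == e["job_occ"] for e in usable)
    return matches / len(usable)

# Hypothetical records with invented occupational codes:
sample = [
    {"training_occ": "29-1141", "job_occ": "29-1141"},  # match
    {"training_occ": "29-1141", "job_occ": "41-2011"},  # no match
    {"training_occ": "29-1141", "job_occ": None},       # missing, dropped
]
rate = job_match_rate(sample)  # 1 of 2 usable records -> 0.5
```

With only ~3% of records usable, any rate computed this way describes a small and likely unrepresentative subset, underscoring the data-quality problem discussed below.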
Taking the idea of providing the right skills even further, DOL may do well to measure the
extent to which WIA trainings and services provided to participants are in high demand in
local labor markets. This is a defined objective in DOL’s Strategic Report (2013), but
currently no measure of states’ success on this objective exists. There is a requirement to
3 Using calculated fields 3020 and 3021 to create a dummy variable equal to 1 where values of the two fields match.
have a majority of business representation in both state and local WIBs (WIA, 1998, Sec
117), which is supposed to improve the promotion of in-demand labor skills, but we cannot
say whether this is the case. As different skills are likely to be in demand in different LWIAs,
we would like to test whether the right skills are promoted in the implementation of the WIA
locally. This could be done if states were required to record the category of the high-demand
sectors in different LWIAs, and the category of training provided to individuals. Based on
this, a performance indicator that measures the extent to which states are able to provide in-
demand skills could be implemented by matching the category of training to the category of
in-demand sectors. This would yield a success-rate measure: the percentage of individuals
whose training provided skills in high demand. The target for such a measure would need to
be related to local conditions, and set in collaboration with local WIA operators.
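Were the required data collected, the proposed indicator could be computed along these lines. All inputs here are hypothetical, since LWIA-level high-demand sector lists are not recorded under WIA today:

```python
def in_demand_training_rate(participants, in_demand_by_lwia):
    """Proposed measure: share of trained participants whose training
    category is among the high-demand sectors recorded for their LWIA.

    `in_demand_by_lwia` maps an LWIA id to a set of sector categories;
    both inputs are illustrative assumptions.
    """
    trained = [p for p in participants if p.get("training_category")]
    if not trained:
        return float("nan")
    hits = sum(
        p["training_category"] in in_demand_by_lwia.get(p["lwia"], set())
        for p in trained
    )
    return hits / len(trained)

# Hypothetical LWIA demand list and two trained participants:
demand = {"LWIA-01": {"Healthcare", "Advanced Manufacturing"}}
people = [
    {"lwia": "LWIA-01", "training_category": "Healthcare"},
    {"lwia": "LWIA-01", "training_category": "Retail"},
]
share = in_demand_training_rate(people, demand)  # -> 0.5
```

Setting the target locally, as the text suggests, would amount to choosing a different benchmark value of this rate for each LWIA.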
Both a job match and a demand-driven skills measure would add significantly to current
effectiveness measures. A job match measure could, in theory, be implemented with data that
is already collected in WIASRD today, but too much of that data is currently misreported. It
appears too difficult for states and caseworkers to record the 8-digit O*Net code4 for
occupation correctly. This is reflected in the many comments in the WIASRD data booklet, where a
number of data errors for occupational and training category are mentioned (SPRA, 2014). It
is therefore recommended that DOL develop a set of occupational categories that are easier
to understand for those who are asked to record the data. Such categories should be broad
enough to allow easy recording, but narrow enough to separate different industries.
Key findings:
4. While the WIA common measures certainly speak to several of DOL’s effectiveness
objectives, as discussed above, they paint an ambiguous picture of state performance.
It is difficult to compare performance across states, given that state targets vary
substantially, based on such things as past performance and economic and
demographic characteristics. This issue can be addressed, in part, by comparing the
gaps between states’ actual and targeted performance, but this effort is complicated
by large fluctuations in targets over time, making it difficult to determine how much
of the gap is explained by movement in performance or movement in the target.
5. Currently, DOL does not collect sufficient data to gauge the relevance of WIA
trainings and services provided within states. Although DOL requires states and
Local Workforce Investment Boards (LWIBs) to identify the skills and trainings in
high demand by local employers, no measure exists that would allow DOL to
determine the proportion of trainings provided that fulfill this criterion, nor does one
exist that would determine how frequently the jobs that participants enter into post-
training are related to the WIA training they received.
4 The O*Net code is a standardized numbering system for specific professions.
5.3 Efficiency
As mentioned in the Department of Labor’s strategic report, states are facing increasing
budgetary constraints (DOL, 2013). Efficiency is, therefore, an increasingly important
objective, as outlined in our introduction. To test whether agents are successful in ensuring
efficiency in their delivery of WIA services, we supplement WIASRD data with state level
WIA Adult Program funding allocation data in order to assess how efficiently states deploy
their resources to achieve desired outcomes. We calculate a cost per job (CPJ) measure by
dividing each state's federal WIA allotment by the number of its WIA Adult Program exiters
who entered employment in each program year.
This very simple measure, constructed from readily available data in the WIA national
summaries of annual performance data and the WIA adult activities program dollar tables,
allows us to assess states' efficiency and to compare state rankings in terms of CPJ and
AEER, in order to understand how a potential tradeoff between different objectives plays
out in practice.
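The CPJ calculation itself is a single division; the figures in the example below are invented for illustration, not actual state data:

```python
def cost_per_job(state_allotment_usd: float,
                 exiters_entered_employment: int) -> float:
    """CPJ: the state's federal WIA Adult allotment divided by the
    number of Adult Program exiters who entered employment that year."""
    return state_allotment_usd / exiters_entered_employment

# A hypothetical state: $12M allotment, 4,000 exiters entering employment.
cpj = cost_per_job(12_000_000, 4_000)  # -> $3,000 per job entered
```

Note that because the numerator is the full allotment rather than spending on successful participants only, CPJ bundles program overhead and unsuccessful interventions into the per-job figure.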
Table 5.1– Cost Per Job PY08-PY12, by State
Source: DOL PY12 WIA State Annual Reports and Summaries & DOL WIA State Statutory
Formula Funding
5.3.1 Discussion of Results
Table 5.1 and Figure 5.5 show the CPJ for the ten states in our selection over the PY08-
PY12 period. At first glance, we see vast differences between the calculated scores for
different states, ranging from $111 for Oregon in PY12 to $21,026 for Connecticut in 2009.
This indicates that states differ dramatically in how efficiently they deliver their WIA
services. There is also large in-state CPJ variation over time, of up to $9,323. Table 5.2 adds
to this story by showing the differences in
overall state ranking when applying CPJ as a measure rather than the traditional success rate
measures for PY12. Idaho, which in PY12 performed better than any other state in terms of
AEER, ranks only 39th
when the CPJ measure is applied. On the other hand, one of the
worst states in terms of AEER performance, Oregon, ranks third in terms of CPJ. The
rankings are nearly inverses of one another, although this is less the case when applying this
comparison to all fifty states and the District of Columbia (Appendix D). Thus, we are left
with the impression that the effectiveness scores calculated as part of state reports and
national summaries fail to address efficiency concerns due to the lack of a relevant
performance measure.
Figure 5.5 – Cost per Job, PY08-PY12
Source: DOL PY12 WIA State Annual Reports and Summaries & DOL WIA State Statutory
Formula Funding
Table 5.2 – Cost per Job and AEER Rankings PY12, by State
Source: DOL PY12 WIA State Annual Reports and Summaries & DOL PY12 WIA State
Statutory Formula Funding
To better understand what the CPJ measure tells us, we look more closely at the relationship
between CPJ and AEER. Figures 5.6 and 5.7 show CPJ and AEER performance calculated
for the WIA Adult Program in PY12, and provide a picture of divergent performance in
terms of CPJ and AEER. Many of the high CPJ states colored red in Figure 5.6 also have
high AEER, as shown in Figure 5.7. This indicates that states struggle to simultaneously
perform well at both effectiveness and efficiency. Thus, we suspect that there is a trade-off
between performance in terms of CPJ and AEER.
Figure 5.6– State Variation in Cost Per Job, PY12
Legend in $USD (nominal).
Source: DOL PY12 WIA State Annual Reports and Summaries & DOL PY12 WIA State
Statutory Formula Funding
Figure 5.7– State Variation in Adult Entered Employment Rate, PY12
Legend in fraction of participants who entered employment.
Source: DOL PY12 WIA State Annual Reports and Summaries
Figure 5.8 shows a scatterplot of CPJ and AEER for all fifty states and the District of
Columbia in PY12, and suggests a positive relationship between the two variables. Such a
positive relationship indicates a possible trade-off between CPJ and AEER, or, put
differently, between effectiveness and efficiency. It is plausible that states can improve their
AEER by spending more money per participant, thus reducing efficiency. The positive
relationship identified is in accordance with our initial interpretation of the changes in state
rankings when CPJ is applied as a measure. However, such a relationship does not account
for other factors that may be related to both the effectiveness and efficiency of states'
service delivery.
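The positive relationship suggested by the scatterplot can be checked with a simple correlation coefficient. A sketch follows; the CPJ and AEER values are hypothetical stand-ins chosen only to illustrate the pattern, not the actual PY12 figures:

```python
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

cpj = [1200, 3500, 6800, 9100, 12000]   # hypothetical cost per job ($)
aeer = [0.55, 0.62, 0.70, 0.74, 0.81]   # hypothetical entered-employment rates
r = pearson(cpj, aeer)
# A positive r is consistent with a trade-off: higher spending per
# placement coinciding with higher entered-employment rates.
```

A positive coefficient alone, of course, establishes only association; as noted above, it cannot rule out confounding factors such as co-enrollment.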
Figure 5.8 – Relationship Between Cost per Job and AEER, PY12
Source: DOL PY13, Q3 WIASRD, DOL PY12 WIA State Annual Reports and Summaries
& DOL PY12 WIA State Statutory Formula Funding
One such factor is the extent to which states’ WIA participants are co-enrolled in other job-
training programs and services, such as Wagner-Peyser programs, Trade Adjustment
Assistance Community College and Career Training Grants Program, and other federal and
local initiatives, which may substantially impact cost efficiency measures across states. The
red squares in Figure 5.8 show states with higher than 90% co-enrollment rates in the
Wagner-Peyser program, based on WIASRD data. Comparing the incidence of co-enrollment in
states and their CPJ estimates, we observe that several states with high rates of co-enrollment
also tend to be states with low CPJ. Table 5.3 further demonstrates the substantial variation
in Wagner-Peyser co-enrollment alone across states in our sample. However, high
co-enrollment is not exclusively associated with low CPJ; Washington and Texas, for
example, combine high co-enrollment with comparatively high CPJ. Still, there is a
clear overrepresentation of states with high co-enrollment rates in the lowest CPJ segment,
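The co-enrollment comparison amounts to a simple flagging exercise, sketched below. The state names, co-enrollment rates, and CPJ values are hypothetical illustrations, not figures drawn from WIASRD:

```python
# Hypothetical states: (Wagner-Peyser co-enrollment rate, cost per job in $).
states = {
    "State A": (0.95, 1_800),
    "State B": (0.97, 2_400),
    "State C": (0.93, 11_500),  # high co-enrollment but high CPJ
    "State D": (0.40, 9_800),
}

HIGH_COENROLLMENT = 0.90  # threshold marking the red squares in Figure 5.8

# States flagged for high Wagner-Peyser co-enrollment...
flagged = {s for s, (rate, _) in states.items() if rate > HIGH_COENROLLMENT}
# ...and the subset of those that also show low CPJ.
low_cpj_flagged = {s for s, (rate, cpj) in states.items()
                   if rate > HIGH_COENROLLMENT and cpj < 5_000}
```

In this toy example, "State C" mirrors the Washington/Texas pattern: high co-enrollment accompanies high CPJ, showing that the association, while common, is not universal.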
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)
USDOL IMPAQ International Capstone Report (Final Client Version) (2)

More Related Content

Viewers also liked

Train the trainer-pass certificate
Train the trainer-pass certificateTrain the trainer-pass certificate
Train the trainer-pass certificateEhab El Barbary
 
www.videoaulagratisapoio.com.br - www.TutoresDePlantao.Com.Br - Física - Exer...
www.videoaulagratisapoio.com.br - www.TutoresDePlantao.Com.Br - Física - Exer...www.videoaulagratisapoio.com.br - www.TutoresDePlantao.Com.Br - Física - Exer...
www.videoaulagratisapoio.com.br - www.TutoresDePlantao.Com.Br - Física - Exer...Video Aulas Apoio
 
Asma ul husna Al Jabbar
Asma ul husna Al JabbarAsma ul husna Al Jabbar
Asma ul husna Al JabbarDr Sameer Ali
 
Evaluation question 6
Evaluation question 6Evaluation question 6
Evaluation question 6Joe Hird
 
Introdução à Biologia
Introdução à BiologiaIntrodução à Biologia
Introdução à BiologiaNathan Aguiar
 
NEW CV 3 - NIÑAJADE REVECHO MILLANO
NEW CV 3 - NIÑAJADE REVECHO MILLANONEW CV 3 - NIÑAJADE REVECHO MILLANO
NEW CV 3 - NIÑAJADE REVECHO MILLANONinaJade Millano
 

Viewers also liked (9)

Train the trainer-pass certificate
Train the trainer-pass certificateTrain the trainer-pass certificate
Train the trainer-pass certificate
 
Que son las TIC, TAC Y TEP ?
Que son las TIC, TAC Y TEP ?Que son las TIC, TAC Y TEP ?
Que son las TIC, TAC Y TEP ?
 
Appx for Developers
Appx for Developers   Appx for Developers
Appx for Developers
 
Pellet Mill C15
Pellet Mill C15Pellet Mill C15
Pellet Mill C15
 
www.videoaulagratisapoio.com.br - www.TutoresDePlantao.Com.Br - Física - Exer...
www.videoaulagratisapoio.com.br - www.TutoresDePlantao.Com.Br - Física - Exer...www.videoaulagratisapoio.com.br - www.TutoresDePlantao.Com.Br - Física - Exer...
www.videoaulagratisapoio.com.br - www.TutoresDePlantao.Com.Br - Física - Exer...
 
Asma ul husna Al Jabbar
Asma ul husna Al JabbarAsma ul husna Al Jabbar
Asma ul husna Al Jabbar
 
Evaluation question 6
Evaluation question 6Evaluation question 6
Evaluation question 6
 
Introdução à Biologia
Introdução à BiologiaIntrodução à Biologia
Introdução à Biologia
 
NEW CV 3 - NIÑAJADE REVECHO MILLANO
NEW CV 3 - NIÑAJADE REVECHO MILLANONEW CV 3 - NIÑAJADE REVECHO MILLANO
NEW CV 3 - NIÑAJADE REVECHO MILLANO
 

Similar to USDOL IMPAQ International Capstone Report (Final Client Version) (2)

CAPA-Overview-PPTSample.pdf
CAPA-Overview-PPTSample.pdfCAPA-Overview-PPTSample.pdf
CAPA-Overview-PPTSample.pdfVishalNair46
 
61506_Capstone_Report_DFID_FINAL_Quantifying_Governance__Indicators-4
61506_Capstone_Report_DFID_FINAL_Quantifying_Governance__Indicators-461506_Capstone_Report_DFID_FINAL_Quantifying_Governance__Indicators-4
61506_Capstone_Report_DFID_FINAL_Quantifying_Governance__Indicators-4Alexander Hamilton, PhD
 
Vendor Performance Management
Vendor Performance ManagementVendor Performance Management
Vendor Performance ManagementGerald Ford
 
Police and Fire On-Line Courseware Training Trends and Evaluation Study
Police and Fire On-Line Courseware Training Trends and Evaluation StudyPolice and Fire On-Line Courseware Training Trends and Evaluation Study
Police and Fire On-Line Courseware Training Trends and Evaluation StudyInteract Business Group
 
Education strategic plan case
Education strategic plan   caseEducation strategic plan   case
Education strategic plan caseManuela Marinescu
 
192020 Capella University Scoring Guide Toolhttpsscor.docx
192020 Capella University Scoring Guide Toolhttpsscor.docx192020 Capella University Scoring Guide Toolhttpsscor.docx
192020 Capella University Scoring Guide Toolhttpsscor.docxaulasnilda
 
The Management of Critical Spares in the Electric & Gas Utility Industry
The Management of Critical Spares in the Electric & Gas Utility IndustryThe Management of Critical Spares in the Electric & Gas Utility Industry
The Management of Critical Spares in the Electric & Gas Utility IndustryScottMadden, Inc.
 
LEAN LEVEL OF AN ORGANIZATION ASSESSED BASED ON FUZZY LOGIC
LEAN LEVEL OF AN ORGANIZATION ASSESSED BASED ON FUZZY LOGIC LEAN LEVEL OF AN ORGANIZATION ASSESSED BASED ON FUZZY LOGIC
LEAN LEVEL OF AN ORGANIZATION ASSESSED BASED ON FUZZY LOGIC csandit
 
Setting Up the Development Performance Evaluation System Overview on Current ...
Setting Up the Development Performance Evaluation System Overview on Current ...Setting Up the Development Performance Evaluation System Overview on Current ...
Setting Up the Development Performance Evaluation System Overview on Current ...Dadang Solihin
 
CENTRICA Financial Account Report
CENTRICA Financial Account ReportCENTRICA Financial Account Report
CENTRICA Financial Account ReportAdam Ariff
 
Performance Measurement Summit
Performance Measurement SummitPerformance Measurement Summit
Performance Measurement SummitPeter Stinson
 
Analysing Oxfam Viet Nam’s Participatory Poverty Mapping Analysis project usi...
Analysing Oxfam Viet Nam’s Participatory Poverty Mapping Analysis project usi...Analysing Oxfam Viet Nam’s Participatory Poverty Mapping Analysis project usi...
Analysing Oxfam Viet Nam’s Participatory Poverty Mapping Analysis project usi...Sanjan Haque
 
Mand e tools_methods_approaches
Mand e tools_methods_approachesMand e tools_methods_approaches
Mand e tools_methods_approachesBasu Dev Ghimire
 
Mand e tools_methods_approaches
Mand e tools_methods_approachesMand e tools_methods_approaches
Mand e tools_methods_approachesDeryc Luima
 

Similar to USDOL IMPAQ International Capstone Report (Final Client Version) (2) (20)

CAPA-Overview-PPTSample.pdf
CAPA-Overview-PPTSample.pdfCAPA-Overview-PPTSample.pdf
CAPA-Overview-PPTSample.pdf
 
61506_Capstone_Report_DFID_FINAL_Quantifying_Governance__Indicators-4
61506_Capstone_Report_DFID_FINAL_Quantifying_Governance__Indicators-461506_Capstone_Report_DFID_FINAL_Quantifying_Governance__Indicators-4
61506_Capstone_Report_DFID_FINAL_Quantifying_Governance__Indicators-4
 
Oei 07-98-00260
Oei 07-98-00260Oei 07-98-00260
Oei 07-98-00260
 
BKD Operational Assessment
BKD Operational AssessmentBKD Operational Assessment
BKD Operational Assessment
 
Vendor Performance Management
Vendor Performance ManagementVendor Performance Management
Vendor Performance Management
 
ISPMS Background, Purpose and Approach
ISPMS Background, Purpose and ApproachISPMS Background, Purpose and Approach
ISPMS Background, Purpose and Approach
 
DEVELOPMENT AND MANAGEMENT
DEVELOPMENT AND MANAGEMENTDEVELOPMENT AND MANAGEMENT
DEVELOPMENT AND MANAGEMENT
 
Police and Fire On-Line Courseware Training Trends and Evaluation Study
Police and Fire On-Line Courseware Training Trends and Evaluation StudyPolice and Fire On-Line Courseware Training Trends and Evaluation Study
Police and Fire On-Line Courseware Training Trends and Evaluation Study
 
Software testing services growth report oct 11
Software testing services growth report oct 11Software testing services growth report oct 11
Software testing services growth report oct 11
 
Education strategic plan case
Education strategic plan   caseEducation strategic plan   case
Education strategic plan case
 
MBA Final Project
MBA Final ProjectMBA Final Project
MBA Final Project
 
192020 Capella University Scoring Guide Toolhttpsscor.docx
192020 Capella University Scoring Guide Toolhttpsscor.docx192020 Capella University Scoring Guide Toolhttpsscor.docx
192020 Capella University Scoring Guide Toolhttpsscor.docx
 
The Management of Critical Spares in the Electric & Gas Utility Industry
The Management of Critical Spares in the Electric & Gas Utility IndustryThe Management of Critical Spares in the Electric & Gas Utility Industry
The Management of Critical Spares in the Electric & Gas Utility Industry
 
LEAN LEVEL OF AN ORGANIZATION ASSESSED BASED ON FUZZY LOGIC
LEAN LEVEL OF AN ORGANIZATION ASSESSED BASED ON FUZZY LOGIC LEAN LEVEL OF AN ORGANIZATION ASSESSED BASED ON FUZZY LOGIC
LEAN LEVEL OF AN ORGANIZATION ASSESSED BASED ON FUZZY LOGIC
 
Setting Up the Development Performance Evaluation System Overview on Current ...
Setting Up the Development Performance Evaluation System Overview on Current ...Setting Up the Development Performance Evaluation System Overview on Current ...
Setting Up the Development Performance Evaluation System Overview on Current ...
 
CENTRICA Financial Account Report
CENTRICA Financial Account ReportCENTRICA Financial Account Report
CENTRICA Financial Account Report
 
Performance Measurement Summit
Performance Measurement SummitPerformance Measurement Summit
Performance Measurement Summit
 
Analysing Oxfam Viet Nam’s Participatory Poverty Mapping Analysis project usi...
Analysing Oxfam Viet Nam’s Participatory Poverty Mapping Analysis project usi...Analysing Oxfam Viet Nam’s Participatory Poverty Mapping Analysis project usi...
Analysing Oxfam Viet Nam’s Participatory Poverty Mapping Analysis project usi...
 
Mand e tools_methods_approaches
Mand e tools_methods_approachesMand e tools_methods_approaches
Mand e tools_methods_approaches
 
Mand e tools_methods_approaches
Mand e tools_methods_approachesMand e tools_methods_approaches
Mand e tools_methods_approaches
 

USDOL IMPAQ International Capstone Report (Final Client Version) (2)

  • 1. U.S. Department of Labor (DOL) Employment and Training Administration (ETA) Occasional Paper Series (ETA-2015) Making Performance Measures Work in U.S. Workforce Development Programs Prepared by: Allison Dunatchik, Jan Thomas Hagen, Stephanie Koo, Jung-A Lee & Malemnganba Chenglei Wairokpam London School of Economics and Political Science (London, England) March 26th , 2015 This paper was funded, either wholly or in part, with Federal funds from the U.S. Department of Labor, Employment and Training Administration (ETA) under Contract Number DOLQ121A21885, Task Order DOLU121A21908. The contents of this publication do not necessarily reflect the views or policies of the Department, nor does mention of trade names, commercial products, or organizations imply endorsement of same by the U.S. Government. This paper was one of two prepared under the 2015 ETA Research Papers Program, which awarded funding to graduate students from two Capstone Teams from two renowned International Schools: The London School of Economics and Political Science from England; and The Institut d'études politiques de Paris from France to conduct original research on topics relating to the workforce investment system.
  • 2. 1 Abstract The recent introduction of the Workforce Innovation and Opportunity Act, a new piece of legislation to supersede WIA and alter DOL’s performance management system, makes a systematic assessment of the strengths and weaknesses of the current WIA PMS timely, relevant, and important. This report evaluates the success of three key elements of the WIA performance management system – (1) performance measurement, (2) performance data, and (3) performance incentives – in promoting DOL’s objectives: effectiveness, efficiency, and equity in the delivery of WIA services and training. Employing a three-pronged analysis of WIA performance management tools – a study of the relevant literature, the application of a theoretical model, and an empirical investigation – we draw ten key findings: 1. Current WIA performance management tools may sufficiently inform DOL of performance on effectiveness, efficiency and equity goals, so long as that information is timely and accurate. 2. The performance measurement and incentive system’s exclusive focus on effectiveness objectives may preclude states from pursuing other DOL objectives. 3. The WIA performance incentive structure may not provide agents with sufficient motivation to align their actions to DOL objectives. 4. The WIA common measures paint an ambiguous picture of state performance on effectiveness goals. 5. Currently, DOL does not collect sufficient data to gauge the relevance of WIA trainings and services provided within states. 6. States vary vastly in efficiency according to our CPJ measure. We find that several drivers of this variation may be at work. 7. Investigation into state WIA Annual Reports reveals a substantial demand for further development in measuring and understanding cost efficiency and its drivers. 8. Despite DOL’s explicit equity objective, individuals with hard-to-serve characteristics are not generally more likely to receive training. 9. 
DOL’s data reporting procedures produce substantial data lags, which hinder the Department’s ability to observe current state and local performance and limit the impact of the performance management system on program accountability. 10. The volume of inconsistencies and errors contained in WIA record data draws into question the extent to which the data can be used to draw conclusions about WIA performance that are reliable and comparable across states. Based on this evaluation, we provide seven practical recommendations for how DOL might improve performance management tools under WIOA to optimize their impact on program outcomes.
  • 3. 2 Table of Contents Abstract.......................................................................................................................... 1 List of Figures................................................................................................................ 4 List of Tables ................................................................................................................. 4 List of Abbreviations...................................................................................................... 5 Executive Summary....................................................................................................... 6 1. Introduction...........................................................................................................11 1.1 Research Aim.......................................................................................................12 1.2 Background on the WIA Programming and the Public Workforce System .......12 1.3 Report Outline.....................................................................................................14 2. Effectiveness, Efficiency, and Equity ...................................................................15 3. Literature Review...................................................................................................17 3.1 Influencing Behavior through Performance Measurement and Incentives .......17 3.2 Performance Measurement in the Public Sector ................................................19 3.2.1 The Effectiveness of Performance Measures in the Public Sector.......................... 20 3.3 Performance Measurement at the U.S. Department of Labor........................... 20 4. Theoretical Model................................................................................................. 22 4.1 Principal-Agent Framework ............................................................................... 
22 4.2 Principals and Agents in PWS and their Interests............................................. 22 4.3 Reducing Asymmetries of Information.............................................................. 23 4.3.1 WIASRD Data ................................................................................................................. 24 4.3.2 Annual Report Requirements......................................................................................... 24 4.4 Aligning Agent Incentives.................................................................................. 25 4.4.1 Commitment Problems................................................................................................... 25 4.4.2 Evaluating the WIA Incentive Scheme ........................................................................ 26 5. Quantitative Analysis of Public Use Performance Data ...................................... 29 5.1 Analytical Approach and Choice of States ......................................................... 29 5.2 Effectiveness....................................................................................................... 30 5.2.1 Improving Effectiveness Measures............................................................................... 34 5.2.2 Additional Measures of Effectiveness .......................................................................... 34 5.3 Efficiency............................................................................................................ 36 5.3.1 Discussion of Results ...................................................................................................... 36 5.4 Equity ................................................................................................................. 45 5.4.1 Discussion of Results ...................................................................................................... 48 6. 
Data considerations.............................................................................................. 54 6.1 Time Lags........................................................................................................... 54 6.2 Inconsistencies in Reporting ............................................................................. 54 7. Summary and Recommendations ........................................................................ 57
  • 4. 3 Works Cited.................................................................................................................. 60 Appendix...................................................................................................................... 64 Appendix A – Terms of Reference........................................................................... 64 Appendix B – Key Definitions and Outtakes .......................................................... 68 Appendix C – Quantitative research and data......................................................... 73 Appendix D – State Performance, Distance from Target, and Cost-Per-Job Rankings .................................................................................................................. 77 Appendix E – Results from Logit Regression Specification ................................... 78 Appendix F – WIA PY12 Individual Level Performance Data Codebook............... 79
  • 5. 4 List of Figures Figure 1.1 – The Public Workforce System.................................................................................... 13   Figure 4.1 – The Principals and Agents of the WIA Title IB Adult Program and their Respective Interests.................................................................................................. 23   Figure 4.2 – Focusing Agent Efforts on Effectiveness ................................................................ 26   Figure 5.1 – Relationship Between Targets and Performance, PY12 ........................................ 31   Figure 5.2 – AEER Distance from Target, PY08-PY12 .............................................................. 32   Figure 5.3 – AAER Negotiated Targets, PY08-PY12................................................................... 33   Figure 5.4 – AEER Actual Performance, PY08-PY12................................................................. 33   Figure 5.5 – Cost per Job, PY08-PY12........................................................................................... 37   Figure 5.6 – State Variation in Cost Per Job, PY12 ...................................................................... 38   Figure 5.7 – State Variation in Adult Entered Employment Rate, PY12.................................. 38   Figure 5.8 – Relationship Between Cost per Job and AEER, PY12.......................................... 39   Figure 5.9 – Cost Per Job, Western States, PY08-PY12............................................................... 41   Figure 5.10 – Cost Per Job, Eastern States, PY08-PY12.............................................................. 42   Figure 5.11 – Cost Per Job, Midwestern States, PY08-PY12 ...................................................... 43   Figure 5.12 – Cost per Job, Southern States, PY08-PY12 ........................................................... 
44   List of Tables Table 4.1 – DOL Objectives............................................................................................................. 24   Table 4.2 – WIA Adult Program Common Measures.................................................................. 25   Table 5.1 – Cost Per Job PY08-PY12, by State............................................................................. 36   Table 5.2 – Cost per Job and AEER Rankings PY12, by State .................................................. 37   Table 5.3 – Co-Enrollment and Cost per Job by State................................................................. 40   Table 5.4 – PY12 Proportion of Individuals with Hard-to-Serve Characteristics.................... 41   Table 5.5 – Proportion of Individuals with Hard-to-Serve Characteristics, Eastern States ... 42   Table 5.6 – PY12 Proportion of Individuals with Hard-to-Serve Characteristics, Midwestern States...................................................................................................................... 43   Table 5.7 – PY12 Proportion of Individuals with Hard-to-Serve Characteristics.................... 44   Table 5.8 – Likelihood of Selection into WIA Training Based on Personal Characteristics, US as a whole............................................................................................................................. 48   Table 5.9 – Likelihood of Selection into WIA Training Based on Personal Characteristics.. 50   Table 5.10 – Likelihood of Selection into WIA Training Based on Personal Characteristics 51   Table 6.1 – Reporting Errors and Inconsistencies in WIA Participant Demographic Information....................................................................................................... 55  
  • 6. 5 List of Abbreviations AEER Adult Entered Employment Rate AJC American Job Centre CPJ Cost Per Job DOL Department of Labor ETA Employment and Training Administration ETP Eligible Training Provider GRPA Government and Performance Results Act ITA Individual Training Account JTPA Job Training Partnership Act LWIA Local Workforce Investment Area LWIB Local Workforce Investment Board LPM Linear Probability Model PM(s) Performance Measure(s) PMS Performance Management System PWS Public Workforce System PY Program Year SWIB State Workforce Investment Board WIA Workforce Investment Act WIASRD Workforce Investment Act Standardized Record Data WIB Workforce Investment Board WIOA Workforce Innovation and Opportunities Act
  • 7. 6 Executive Summary Since 1998, programs under the Workforce Investment Act (WIA) have served as the United States Department of Labor’s primary vehicle for providing job training and employment services to American jobseekers. WIA programs seek to place Americans looking for work into well-paid jobs and supply employers with well-skilled, well-prepared workers. To this end, the WIA programs provide individuals with tailored services, from employment search assistance to skills gains, to ensure their job market success. Recognizing the need for job training and services that are locally relevant and in demand, DOL grants state and local agencies substantial control over the design and implementation of federally funded WIA programs. To keep tabs on these highly decentralized programs, DOL has long employed a performance management system (PMS) for WIA funded programs with the aim of ensuring accountability, encouraging high quality of services, and monitoring the performance of these locally implemented programs. When crafted and administered well, performance management systems can be powerful tools for ensuring that employees pursue desired objectives. Drawing from WIA legislation and DOL strategic documents, we define DOL program objectives along three broad categories: effectiveness, efficiency, and equity in the delivery of WIA services. These goals are further expanded in the figure below. DOL Objectives The recent introduction the Workforce Innovation and Opportunity Act, a new piece of legislation to supersede WIA and alter DOL’s performance management system, makes a systematic assessment of the strengths and weaknesses of the current WIA PMS timely, relevant, and important. This report evaluates the success of three key elements of the WIA performance management system – (1) performance measurement, (2) performance data, (3) and performance incentives – in promoting DOL’s objectives. 
Based on this evaluation, we provide recommendations for how DOL might improve these management tools to optimize their impact on program outcomes. Management Systems, Performance, and Outcomes We begin with a review of literature from public administration, economics, and management to better understand the landscape of performance management systems in the public sector, paying specific attention to the role of performance measurement and incentives. This investigation brings to light the mechanisms through which performance
  • 8. 7 managements systems influence program outcomes. Performance measures (PMs) define the employee performance dimensions or results of interest to managers. Performance data produced through PMs inform managers of employee performance. An incentive system allows managers to link performance results to rewards or sanctions with the aim of reinforcing good behavior and disincentivizing bad behavior. Problems may arise, however, when PMs do not measure the right things, data is unreliable, or incentives lead to distorted behavior. A study of DOL performance management systems in the past reveals each of these issues to have occurred. Principals, Agents and Interests Next, we develop a theoretical model that frames the relationship between DOL and state and local WIA program actors from a principal-agent perspective. We propose that the decentralized nature of WIA programming produces a principal-agent problem in two ways: the agents (state and local actors) may have interests distinct from the principal’s (DOL’s) objectives, and asymmetries of information exist such that the principal cannot directly observe the actions of the agent. These two issues may lead to moral hazard, where agents pursue their own interests at the expense of the principal’s defined objectives of effectiveness, efficiency, and equity. With these issues in mind, we assess the extent to which WIA PMs and data sufficiently reduce asymmetries of information, and whether WIA performance incentives adequately align agent incentives to promote the principal’s objectives. Key findings: 1. Current performance management tools may address the asymmetry of information between DOL and its agents through the solicitation of extensive record data and state reports, so long as that information is timely and accurate; 2. The performance measurement and incentive system’s exclusive focus on effectiveness objectives may preclude states from pursuing other DOL objectives. 
Because targets and incentives are attached only to effectiveness measures, agents are not able to credibly commit to pursuing all three objectives. This is particularly true where resources are limited and pursuing all three objectives would come at the expense of performance on effectiveness.

3. The incentive structure, as is, may not provide agents with sufficient motivation to align their actions with DOL objectives. Rewards lack proportionality and timeliness, and sanctions are not consistently enforced, reducing their credibility as threats.

Evaluating Effectiveness, Efficiency and Equity

Expanding on the conclusions drawn in our theoretical model, we conduct an empirical investigation of state-level performance on each of the three Department objectives. Using Program Year 2013, Q3 WIASRD individual-level data and a constructed panel of aggregated state-level performance data for Program Years 2008-2012, we evaluate how well WIA performance measures and data inform the Department of state performance on DOL objectives, and how well they work to drive state performance toward those goals.

Effectiveness
We evaluate how well current WIA PMs promote DOL's effectiveness objectives by analyzing state performance compared to their performance targets over time. We construct a measure to indicate how closely states perform to their projected targets and observe trends over time and across states in an effort to gauge effectiveness performance.

Key findings:

4. While the WIA common measures certainly speak to several of DOL's effectiveness objectives, as discussed above, they paint an ambiguous picture of state performance. It is difficult to compare performance across states, given that state targets vary substantially based on such things as past performance and economic and demographic characteristics. This issue can be addressed, in part, by comparing the gaps between states' actual and targeted performance, but this effort is complicated by large fluctuations in targets over time, making it difficult to determine how much of the gap is explained by movement in performance or movement in the target.

5. Currently, DOL does not collect sufficient data to gauge the relevance of WIA trainings and services provided within states. Although DOL requires states and Local Workforce Investment Boards (LWIBs) to identify the skills and trainings in high demand by local employers, no measure exists that would allow DOL to determine the proportion of trainings provided that fulfill these criteria, nor does one exist that would determine how frequently the jobs that participants enter post-training are related to the WIA training they received.

Efficiency

To test whether agents are successful in ensuring efficiency in their delivery of WIA services, we calculate a Cost Per Job (CPJ) measure by dividing annual federal WIA allotments by the number of WIA Adult Program exiters who entered employment in the different states for each program year. We compare CPJ within and across regions of the country.

Key findings:

6.
States vary vastly in efficiency according to our CPJ measure. We find that several drivers of this variation may be at work:

a. Co-enrollment rate: the extent to which states' WIA participants are co-enrolled in other non-WIA job-training programs and services may substantially impact cost efficiency across states. In general, but not always, we observe a correlation between high co-enrollment and low CPJ.

b. Prioritization of equity: the proportion of individuals with substantial barriers to employment, such as those with disabilities or limited English proficiency, has consequences for cost efficiency. We find that states serving greater proportions of these individuals tend to have higher CPJ numbers.

c. Prioritization of effectiveness: states placing emphasis on effectiveness PMs may do so at the expense of efficiency, spending more money per participant
to increase effectiveness. We find a weak correlation between high performance and high CPJ.

d. Cost of local training programs: large variations exist in the costs of the types of trainings that states and LWIBs provide. However, these costs are not recorded in WIASRD data or in any publicly available DOL source, precluding a systematic analysis of these costs.

7. Investigation into state WIA Annual Reports reveals a substantial demand for further development in measuring and understanding cost efficiency and its drivers. Several states have begun to develop measures of Return on Investment (ROI) from WIA training programs, although methodologies differ starkly across states.

Equity

To test whether states prioritize individuals with hard-to-serve characteristics (those over 55 years of age, high school dropouts, individuals with a disability, single parents, individuals with limited English proficiency, and individuals with low income) in the delivery of WIA services, we run a linear probability model on several states to estimate how having hard-to-serve characteristics influences an individual's probability of being selected into WIA training, the highest and most resource-intensive level of WIA services. We also re-run our tests as logit models to check for robustness, and these confirm our results. If states are pursuing DOL's equity objective, we should observe that having hard-to-serve characteristics increases the likelihood of being selected into training.

Key finding:

8. Despite DOL's explicit equity objective, individuals with hard-to-serve characteristics are not generally more likely to receive training. This may indicate both a failure to promote equity and state-level differences in how agents prioritize hard-to-serve individuals. A measure of equity could facilitate such prioritization and increase accountability toward this objective, while leaving room for local considerations.
The performance measurement system currently in place under WIA does not, however, appear to promote the Department's goal of equitable service delivery.

Data Considerations

Revisiting our previous caveat that the usefulness of WIA's performance measurement is contingent upon the quality of data collected, we provide a brief assessment of the reliability of WIASRD data.

Key findings:

9. Data reporting procedures produce substantial data lags. These delays hinder DOL's ability to observe current state and local performance, limiting the impact of the performance management system on program accountability. Because of such substantial data lags, DOL is also unable to provide timely rewards and sanctions, which is necessary for incentives to be fully effective.

10. In conducting a quantitative analysis of our sample of ten states, we find a concerning number of data collection and reporting inconsistencies. This volume of
inconsistencies and errors draws into question the extent to which the data can be used to draw conclusions about WIA performance that are reliable and comparable across states.

Recommendations

Based on our analysis and key findings, we have identified several potential pathways to increase the effectiveness of the current PMS. The following recommendations could prove particularly effective if considered in relation to the upcoming implementation of WIOA. The Department of Labor should:

1. Consider weighting state and local ex-post performance rather than weighting performance targets. This method would better gauge effectiveness performance and encourage states to pursue equity objectives.

2. Create and make use of a training and job category match measure that would reveal important information about the ability of states and LWIBs to provide relevant training. Such a measure would place greater emphasis on providing in-demand skills. The Department should also consider developing a demand-driven skills training match measure.

3. Develop and make use of a standardized efficiency measure. States are currently developing ROI models of their own, and such a measure could be based on these. However, a simple cost per job measure would be easy to implement and effective at measuring efficiency. A standardized measure developed by DOL would also allow for comparison of states' efficiency in service delivery.

4. Collect and report individual-level costs of training programs and intensive services in WIASRD. This would allow for more precise calculations of cost per successful intervention, which could form an easy-to-implement standardized efficiency measure as described above.

5. Ensure that incentives are administered in a timely and consistent way to establish a clear link between performance measures and incentives.
When agents clearly understand how and why their performance is being rewarded or sanctioned, incentives can be more effective at inducing desirable and expected behavioral responses. Timely grants allow agents to consciously evaluate their performance and identify more directly the changes needed for improvement.

6. Incorporate efforts to improve data quality into the performance goals and the incentive structure. The assessment of performance is based on the data collected. The Department should give greater priority to improving data infrastructure and training during the transition to WIOA. It should also take advantage of the legislative transition phase to improve data collection and reporting procedures.

7. Actively support and promote any study that makes use of the introduction of WIOA to assess the impact of WIA job training programs. The Department should aim to collect annual WIASRD panel data to facilitate future studies of impact.
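The cost-per-job calculation described in the efficiency analysis above, and proposed as a standardized measure in recommendation 3, is a simple ratio. A minimal sketch, using invented figures rather than actual WIASRD or allotment data:

```python
def cost_per_job(annual_allotment, exiters_entered_employment):
    """Cost Per Job (CPJ): a state's annual federal WIA allotment divided by
    the number of WIA Adult Program exiters who entered employment."""
    if exiters_entered_employment <= 0:
        raise ValueError("CPJ is undefined without at least one placement")
    return annual_allotment / exiters_entered_employment

# Hypothetical program-year figures for two illustrative states.
states = {
    "State A": {"allotment": 12_000_000, "entered_employment": 4_000},
    "State B": {"allotment": 9_000_000, "entered_employment": 1_500},
}
for name, s in states.items():
    cpj = cost_per_job(s["allotment"], s["entered_employment"])
    print(f"{name}: ${cpj:,.0f} per placement")
```

On these invented figures, State B spends twice as much per placement as State A ($6,000 versus $3,000) despite receiving the smaller allotment; this is the kind of within- and across-region comparison the measure supports.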
1. Introduction

The Public Workforce System (PWS), a network of federal, state, and local agencies headed by the US Department of Labor (DOL), aspires to place American jobseekers into well-paid work and supply employers with well-skilled, well-prepared workers. To this end, the PWS provides individuals with tailored services, from employment search assistance to skills training, to ensure their job market success. The provision of such services by the System produces better-trained and better-skilled workers for employers. The Workforce Investment Act (WIA) of 1998 has been an instrumental vehicle for job training and employment services within the PWS, but is to be superseded by the Workforce Innovation and Opportunity Act (WIOA) of 2014. With program delivery highly decentralized from Department control, DOL has long employed a performance management system (PMS) in its federally funded employment and job training programs with several aims in mind: to ensure accountability and quality in the local delivery of federal programs while allowing for flexibility, to monitor and inform DOL of state and local workforce training activities, and to create an incentive system through which DOL can drive high performance by allocating rewards to well-performing states and sanctions to poor-performing states. The WIA PMS consists of six main elements (Otley, 1999; National Performance Review, 1997):

1. Performance planning: defining and setting performance objectives, goals, and metrics.

2. Quality management: quality assurance, quality control, and inspections.

3. Performance measurement: designing a monitoring program and monitoring frequency.

4. Performance data: assessing and validating collected performance data, evaluating and utilizing performance information.

5.
Performance feedback mechanisms: performance assessment, feedback and feed-forward loops that enable the organization to learn from its experience and adapt its current behavior and resulting sanctions and rewards.

6. Corrective actions/performance improvement planning: post-report planning.

The implementation of WIOA promises to make substantial adjustments to the current PMS, particularly with regard to performance measurement. This may have important implications for program outcomes. Although a common saying about performance measurement is "what gets measured gets done," the relationship between performance measurement and program outcomes is not well understood. As such, this legislative change to WIOA marks an important opportunity to take stock of the strengths and weaknesses of the current PMS under WIA, with specific attention to PMs, to inform the design and implementation of performance management tools under the new act.
1.1 Research Aim

Our aim in this research is three-fold. First, we seek to shed light on the relationship between three key elements of the WIA performance management system - (1) performance measurement, (2) performance data, and (3) performance feedback mechanisms - and program outcomes. To accomplish this, we narrow our focus to a specific WIA workforce training program, the Title IB Adult Program, and critically assess the PMs DOL currently employs to track program performance, the performance incentives attached to these measures, and the data generated in the process, in an effort to determine the usefulness of these controls in eliciting the Department's desired program outcomes. This assessment informs our second aim: to identify methods of optimizing the impact of performance measures, data, and incentives on program outcomes. We draw on our analysis of these public sector controls under WIA to provide DOL recommendations for how they might be improved in WIOA to better influence program outcomes. Our final aim is to raise important questions about the PMS employed by the Department for future consideration. Through our investigation, we hope to provide a departure point for subsequent research, and shape an agenda for researchers at DOL looking at the effectiveness of the system.

1.2 Background on WIA Programming and the Public Workforce System

Before any analysis of the system, it is necessary to gain an understanding of WIA, how it is delivered, and what constitutes the Title IB Adult Program. This section provides a brief background to that end. The Workforce Investment Act (WIA) of 1998 is a federal act that provides workforce investment activities through state and local workforce investment systems to: (1) increase employment, job retention, earnings, and skills attainment, (2) improve the quality of the workforce, (3) reduce welfare dependency, and (4) enhance the productivity and competitiveness of the nation.
The Act creates sixteen federally funded workforce development programs that span four federal departments, including the Department of Labor (Blank et al., 2011). The workforce development programs run by DOL are delivered through the Public Workforce System (PWS), which is a network of federal, state and local offices that are federally funded and locally implemented. Figure 1.1 illustrates the hierarchy and role of each tier of actors in this network. The Department of Labor heads the PWS and is responsible for administrative oversight, funding and research, and policy guidance. State Workforce Investment Boards (SWIBs) sit below DOL in the PWS hierarchy, and are charged with setting state-level strategic objectives that reflect both local interests and federal goals. Both SWIBs and Local Workforce Investment Boards (LWIBs) are comprised of representatives from businesses, labor organizations, educational institutions, and community organizations. LWIBs are accountable to SWIBs and responsible for setting local strategic direction, establishing funding priorities, and determining what kind of job- and skills-training
American Job Centers (AJCs) should deliver. The LWIBs also determine how many AJCs are required locally, where these should be located, and how they will operate. Finally, AJCs are responsible for implementing WIA programming, and provide jobseekers with the resources and trainings needed to find employment. AJCs are also responsible for the delivery of other federal, state, and local job training programs apart from WIA.

Figure 1.1 – The Public Workforce System

This report focuses on the WIA Title IB Adult Program, which is available to all individuals over the age of 18 and constitutes one of the largest federally funded employment and training programs in the United States. Adult Program services are provided through One Stop Career Centers. These services are provided in three tiers of progressing intensity: Core, Intensive, and Training services. Participants must first utilize Core services before being deemed eligible for Intensive services, and utilize Core and Intensive services before receiving Training. Core services, which include job search services, tools, and labor market information, are available to all jobseekers. Intensive services are available to individuals requiring assistance beyond Core services (as determined by AJC caseworkers), and include more comprehensive assessments of individuals' skills and the development of individualized employment plans, counseling, and career planning. Finally, Training services are provided for those requiring the most assistance. Training services include both occupational and basic skills training. The menu of training programs available to WIA Adult Program participants is determined at the LWIB level, and is provided by select local Eligible Training Providers (ETPs), such as local community colleges, which are approved at the local and state levels.
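The tiered eligibility progression described above (Core first, then Intensive, then both before Training) can be sketched as a simple check. This is an illustrative model only; in practice, movement between tiers also depends on AJC caseworker assessments, which are not captured here:

```python
TIERS = ("Core", "Intensive", "Training")  # in order of progressing intensity

def next_eligible_tier(services_used):
    """Return the next WIA Adult Program service tier a participant may access.

    Participants must use Core services before Intensive, and both Core and
    Intensive before Training; returns None once all tiers have been used.
    (Caseworker judgment about whether a higher tier is needed is not modeled.)
    """
    for tier in TIERS:
        if tier not in services_used:
            return tier
    return None

print(next_eligible_tier(set()))     # a new jobseeker starts with Core
print(next_eligible_tier({"Core"}))  # and may then be deemed eligible for Intensive
```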
WIA Title IB funds these training programs (or a portion of them) through Individual
Training Accounts (ITAs) that are set up for participants, who then select an appropriate training program from an ETP.

1.3 Report Outline

Our evaluation of performance measures, incentives, and data under the WIA PMS comprises six parts. First, we define DOL's objectives, through which we assess current performance management. Second, we provide a comprehensive literature review of public sector PMs and their role in broader public sector PMSs. Third, we develop a theoretical model to explore how current DOL PMs and incentives influence agency behavior and program outcomes. Fourth, we explore the relationship between performance indicators and three primary DOL objectives using data from the DOL Employment and Training Administration (ETA) website and the WIASRD public use dataset. Fifth, we discuss performance data issues encountered in our analysis and their implications for the effectiveness of the PMS. Finally, based on our analysis, we provide a set of recommendations for future improvements of DOL performance management.
2. Effectiveness, Efficiency, and Equity

In order to assess the relationship between WIA performance measures, incentives, and program outcomes, we must first clearly establish the outcomes and objectives of interest to DOL. This section draws from WIA legislation and DOL strategic planning documents to outline the primary goals of WIA programming. WIA is clear in the intended goals of the employment and job training programs and services outlined in the Act, stating in the first paragraph:

"The purpose of this subtitle is to provide workforce investment activities, through statewide and local workforce investment systems, that increase the employment, retention, and earnings of participants, and increase occupational skill attainment by participants, and, as a result, improve the quality of the workforce, reduce welfare dependency, and enhance the productivity and competitiveness of the Nation" (WIA, 1998, Sec 106).

Despite the clarity and directness of these stated goals, the approach that states and local agencies should adopt in achieving them is less clear. Taken at face value, these goals may suggest an approach that entails focusing training and resources on those most likely to succeed, and thus most likely to positively influence performance measures. Such an approach speaks to the prioritization of effectiveness in achieving the desired outcomes, and a prioritization of efficiency in maximizing performance given a certain programming budget allocation. Indeed, in outlining its performance management system, WIA describes the system's intention "to promote the efficiency and effectiveness of the statewide workforce investment system in improving employability for jobseekers and competitiveness for employers" (WIA, 1998, Sec 136 (e) – (3)). In addition to efficiency and effectiveness goals, WIA also reveals concerns about equity in the delivery of training services.
In detailing how WIA program applicants should be selected into services, WIA stipulates that, where funds are limited, "priority shall be given to recipients of public assistance and other low-income individuals for intensive services and training services" (WIA, 1998, Sec 134 (d) – (4)(E)). It further explains that, while most skills training activities should link to employer demand, states may create additional services to serve:

"Special participant populations that face multiple barriers to employment … [which may include] … I) individuals with substantial language or cultural barriers; II) offenders; III) homeless individuals; IV) other hard-to-serve populations as defined by the Governor involved" (WIA, 1998, Sec 134 (d) – (4)(G)(iv)).
The effectiveness, efficiency, and equity goals outlined in WIA have been reiterated in subsequent DOL strategy documents. In DOL's Strategic Plan for Fiscal Years 2014-2018, the Department details WIA's contribution to the Department's mission to "promot[e] and protect opportunity for all workers and their employers" (DOL, 2013, p. iii) by preparing workers for better jobs (DOL, 2013). To do this, the Department charges the ETA with the responsibility to "advance employment opportunities for US workers in 21st century demand sectors and occupations using proven training models and through increased employer engagement and partnerships" (DOL, 2013, p. 11) through WIA programming. It also recommits to equity concerns, specifying that the Department should maintain a "focus on the hardest to serve populations, assuring that these groups expand their economic opportunities and do not get left behind" (DOL, 2013, p. 15).

Given the Department's consistent advocacy of all three goals of effectiveness, efficiency, and equity in the delivery of WIA services, we adopt an approach that evaluates whether PMs, performance data, and incentives work to promote these objectives. We use a principal-agent framework to assess whether the Department's three goals conflict with one another in influencing the behavior of state and local agents, and assess whether the WIA performance measurement system elicits an overemphasis of any of the three goals at the expense of others. While effectiveness-efficiency-equity trade-offs are common in organizations with finite resources, we find that because performance standards and performance incentives are linked exclusively to effectiveness goals, state and local agents have little motivation to exert effort and resources pursuing equity and efficiency goals.
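The incentive problem described above can be made concrete with a toy model (the linear reward rule and all weights below are invented for illustration): an agent splitting a fixed effort budget across the three objectives, but rewarded only on measured effectiveness, maximizes its reward by ignoring efficiency and equity entirely.

```python
from itertools import product

def reward(effort, weights):
    """Stylized reward: a weighted sum of effort devoted to each objective."""
    return sum(weights[obj] * effort[obj] for obj in effort)

objectives = ("effectiveness", "efficiency", "equity")
# Incentives attached exclusively to effectiveness, as under the WIA PMS:
effectiveness_only = {"effectiveness": 1.0, "efficiency": 0.0, "equity": 0.0}

# Enumerate coarse effort splits that sum to one unit of effort.
grid = [x / 10 for x in range(11)]
splits = [dict(zip(objectives, s))
          for s in product(grid, repeat=3) if abs(sum(s) - 1.0) < 1e-9]

best = max(splits, key=lambda e: reward(e, effectiveness_only))
print(best)  # all effort goes to effectiveness; efficiency and equity get none
```

Under any reward weights that place some value on efficiency and equity, the maximizing split changes, which is the intuition behind the report's recommendation to attach measures and incentives to all three objectives.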
3. Literature Review

3.1 Influencing Behavior through Performance Measurement and Incentives

Performance measures (PMs), data, and incentives are three important elements of a broader management control system, which exists to manage employee behaviors, guiding them to the "right" behaviors toward the achievement of organizational goals. PMs provide measurable, observable information about the results of an employee's actions that allows evaluators to observe progress toward performance targets. The existence of performance data allows managers to link the results of employee efforts to an incentive system of rewards and sanctions. Incentives linked to results encourage employees to see beyond the actions they take and focus, instead, on the consequences of their actions. Collecting data on performance serves three main purposes, all of which help facilitate performance improvement. First, it allows for the identification of the best and worst performers, allowing managers, in theory, to find the best practices of the best performers and improve the performance of the worst performers. Second, it allows managers to identify the dimensions in which an organization is struggling; that is, PMs can identify what organizations need to work on improving. Finally, being able to chart performance allows employers to monitor the efficient use of scarce financial resources (Burgess et al., 2002). Theoretically, performance measurement and incentive systems can influence results, as long as three basic conditions are met: (1) both the employees and the organization know what the desired results are, (2) employees are actually able to influence the results for which they are being held accountable, and (3) the results being controlled are effectively measurable. For results controls to actually evoke the "right" behaviors, or the behaviors desired by the managers, an additional set of conditions must be met.
PMs must be (1) congruent, (2) precise, (3) objective, (4) timely, and (5) understandable, and incentives should be (6) valuable to the agent, (7) large enough and visible enough to influence behavior, and (8) administered in a timely manner (Osborne et al., 1995; Merchant and Van der Stede, 2011). First, the designer of a management control system must make sure that a results control is congruent with the ultimate goals of the organization by carefully choosing the right results to measure and placing appropriate weights on the different measured areas. This avoids a scenario in which employees attempt to game the system by focusing effort on the areas in which it is easiest to attain good performance, neglecting areas that might be harder to perform well in but more valuable to the organization. Second, a PM must be precise; that is, a results measure must be able to be reported as a specific numerical value, not just as a range of values. Third, a PM must be objective, or free from bias. The
more discretion the evaluated employees have in how their performance is being measured, the less objective a PM is. Fourth, a PM should be timely to effectively elicit the desired results; that is, little time should separate actual performance from either the measurement of results or the provision of rewards. And fifth, a PM should be understandable. In other words, employees must know what they are being held accountable for, and must understand what they can do to influence the measure. Once PMs are set to fulfill the above criteria, an incentive system of rewards and sanctions must be established, such that employees are motivated to improve on the dimensions in which they are measured. In order for incentives to evoke the desired behavioral response among employees, they must offer something of value in exchange for good performance in the case of rewards, or provide some kind of meaningful penalty in the case of sanctions. Incentives must also be large enough that they outweigh the effort costs employees may endure pursuing the desired performance outcome. Finally, incentives must be administered in a timely manner so that employees are able to sufficiently link their effort and actions to consequences (Merchant and Van der Stede, 2011). Though theory speaks to the potential improvements that could be borne of performance measurement, a rich body of literature addresses its potential negative consequences, specifically gaming responses to incentive systems. Linking rewards and sanctions to performance could distort incentives and elicit unintended dysfunctional responses that do not align with desired outcomes.
Because rewards are allocated based on measured performance, PMs present agents with a direct trade-off between (1) performing as well as possible on the measures and (2) engaging in welfare-improving activities that may divert them from maximizing their performance against PMs. When given explicit incentives, agents could respond by changing their behavior. Agents could respond by "gaming" the incentive system, acting strategically to raise their performance on a measure, and therefore the size of their potential reward, without necessarily giving thought to furthering the goals of the organization for which they work (Dixit, 2002). Gaming responses can take different forms, from cream skimming to strategic timing. Cream skimming is the practice of providing a service only to those customers that are more profitable to serve, because they are high-value, low-cost, or both. Strategic timing is the practice of manipulating how and when performance is reported to maximize awards, for example, by transferring performance in excess of the target level to the next reporting quarter. This is considered a gaming response if it leads to a costly misallocation of resources (Courty and Marschke, 2004). It is often said of performance management systems that "what gets measured gets done." It follows that, much as management control systems can be powerful tools to direct agent behavior toward desirable goals, they can just as easily lead agents to act in perverse, welfare-decreasing ways.
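Strategic timing is easy to see in a stylized numerical example (the target, reward rule, and figures below are all invented): if a fixed bonus is paid in every quarter the target is met, and performance above the target earns nothing extra, an agent who over-performs in one quarter gains by reporting the excess in the next.

```python
def total_reward(reported_by_quarter, target=100, bonus=1_000):
    """Stylized reward rule: a fixed bonus for each quarter the target is met;
    performance above the target earns nothing extra that quarter."""
    return sum(bonus for q in reported_by_quarter if q >= target)

actual = [130, 80]                # placements actually achieved in Q1 and Q2

honest = total_reward(actual)         # target met in Q1 only
gamed = total_reward([100, 110])      # 30 Q1 placements reported in Q2 instead

print(honest, gamed)  # 1000 2000: shifting excess performance doubles the reward
```

The manipulation changes nothing about real outcomes, only their reported timing, which is why Courty and Marschke treat it as a gaming response when it induces a costly misallocation of resources.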
3.2 Performance Measurement in the Public Sector

Performance measurement and incentives have long had a presence in the private sector, where financial stakeholders have strong, direct interests in monitoring the performance of their corporate agents. These management tools came into use in the public sector with a wave of other private sector practices in the late 1980s, collectively known as "New Public Management," the whole of which was intended to "reinvent government" and make the public sector more competitive, results-oriented, and mission-driven (Osborne and Gaebler, 1992). The demand for performance measurement continued into the 1990s, a decade in which the ruling theme, at all levels of government, was a call for the documentation of performance and explicit outcomes of government action (Radin, 2000). Despite their roots in the private sector, performance measurement and incentive systems operate much differently in the public sector. Public sector organizations have features distinct from those of private sector organizations; government bureaucrats do not cater to shareholders, but rather to politicians, financiers, professional organizations, and users (Dixit, 2002). Additionally, civil servants in public sector organizations may have different sources of motivation than what is normally assumed of agents in the incentive literature, and may therefore not respond to potential rewards and sanctions in the same way private sector employees might (Le Grand, 1997). Principal-agent theory assumes rational, self-interested agents acting in ways that maximize their payoffs (Eisenhardt, 1989). In this framework, the principal uses PMs to align the agent's interests with his or her own. The standard framework is rendered more complex in the public sector setting, where agents are likely to work in situations with multiple principals and multiple tasks.
Public agents likely have different tasks assigned by different principals who are interested in pursuing different objectives (Propper and Wilson, 2003). Different principals having different objectives may lead to defined tasks that are difficult to pursue simultaneously, if not completely contradictory. Having multiple principals with varying degrees of power and areas of emphasis creates a complicated system for the agents, one in which they are directed from a number of sources and accountable to different authorities with varying emphases on outcomes. This diffusion of direction complicates management. Even if tasks were aligned, the mere multiplicity of tasks and principals makes high-powered incentives, like direct financial rewards for performance, less useful in the public sector than in the private sector (Dixit, 2002). As such, PMSs may not be able to elicit the same incentivizing power in the public sector as in the private sector. Public sector employees not only face different institutional features, but also have different sources of motivation from private sector employees, following the intuition that public employees comprise a group of self-selected and mission-driven individuals. Ashraf et al. (2014) find evidence of the relative strength of non-monetary rewards in motivating agents hired to perform pro-social tasks: tasks that create value not just for the principal, but also for society at large. Hairdressers tasked with selling condoms to their patrons exerted more effort and generated higher sales when awarded stars
to put on a publicly visible thermometer chart displaying progress than when offered financial margins, small or large (10% and 90% commission on the retail price, respectively), on their sales. Their findings suggest that PMSs designed for the public sector may want to envision different types of rewards and sanctions for their incentive systems, taking into account these potential differences in motivation and values.

3.2.1 The Effectiveness of Performance Measures in the Public Sector

Performance measurement and incentives have been used in a variety of public service areas. To help assess the performance of these tools in the public sector, we refer to empirical studies of past U.S. job training programs. Heckman et al. (2002) find that PMs may not sufficiently link to the long-term goals of the Department of Labor. That is, job-training centers were not adding value in their pursuit of performing well on PMs in the short term. Courty and Marschke (1997; 2004) also find evidence of gaming responses to performance targets, and of a disparate impact on long-term organizational goals. Barnow (2000) likewise questions the effectiveness of using PMs to achieve organizational goals in the public sector. He notes that sanctions for failure to meet JTPA performance targets were not always imposed, with governors instead opting to modify the PMs in response to poor performance. He concludes that rankings on JTPA measures do not correlate with rankings on program impact and organizational goal achievement. There is empirical evidence supporting the idea that stronger performance incentives may be more encouraging of gaming responses. Heinrich (1999) explored one particular job-training center in Illinois that used performance-based contracts to choose its service providers. These providers responded to the measures by cream skimming and by providing services that were less intensive and therefore lower-cost. Heckman et al.
(1997), after studying what is argued to be a more representative job-training center in Corpus Christi, find evidence to the contrary – that case workers were more interested in equity; that is, they were more interested in serving those who needed help the most. And although the evidence on cream skimming is neither consistent nor strong, qualitative analyses of the system, including interviews of local service providers, suggest its existence (Barnow and Smith, 2004).

3.3 Performance Measurement at the U.S. Department of Labor

In the context of the PMS used to evaluate job training programs provided by the U.S. Department of Labor, there are two main reasons for performance measurement: (1) to monitor and (2) to improve performance (Barnow, 2000). The Department established a pioneering performance measurement system under the Job Training Partnership Act (JTPA) in 1982. The Act, which established federal assistance programs providing job training and job placement assistance to low-income and low-skilled workers, sought to bring accountability to its highly decentralized programs through the use of a performance measurement system. The performance measurement system devised in the JTPA was unique in that it: (1) directed performance measures at outcomes (such as the number of trainees placed in jobs, their mean income, etc.) rather than at inputs and outputs (such as the number of people trained) (Heinrich, 2004), (2) linked PMs across federal, state, and local levels of government, and
(3) provided performance-based financial incentives for state governments and local program administrators (Heinrich, 2002). Literature assessing the JTPA’s performance measurement system has found that conflicting goals within JTPA itself may have limited the effectiveness of the system. Although one of JTPA’s explicit goals was to focus on those most in need of services, another goal of government is to deliver services effectively, maximizing total net benefits from the program. Because the PMs created under the JTPA, such as mean earnings and job placement rate, captured performance according to the latter, these measures may have incentivized local program administrators to engage in cream skimming (Heckman et al., 2002; Heinrich, 2002; Courty and Marschke, 2011). JTPA’s successor, the Workforce Investment Act (WIA) of 1998, sought to address this problem by expanding the measures used to capture performance. The Act added several credential rating measures to gauge the trainee skills, education, and training gains that result from WIA programs. New measures of “customer satisfaction” (customers being both trainees and employers) were also added. While these additional measures provided a more nuanced view of performance, they also added to the local data collection burden. The impact of this additional burden was compounded by new requirements for state performance data management systems imposed by WIA. Studies show that this additional burden has been problematic, requiring some states to develop or redesign their data management software, leading to substantial delays and data lags (GAO, 2002). Inadequate direction from the Department of Labor on how to collect and record performance data under WIA has also led to inconsistencies within the data, limiting the extent to which data from different program administrators can be compared (GAO, 2013). Under WIA, the targets are arrived at through negotiation between the Department and the states.
The negotiation process is supposed to reflect the economic, geographic, and demographic conditions of the state. This is distinct from the JTPA process, in which the targets were arrived at using a regression-adjusted procedure to incorporate economic and demographic characteristics. Without a uniform negotiating procedure, performance targets depend on how the states negotiate with DOL. Heinrich (2007) finds no consideration of these factors, in particular education and race. She also finds a negative relation between performance bonus size and performance. From 2009, the federal targets were adjusted for demographic and economic conditions (DOLETA, 2009; see also Bartik et al., 2009). The greater concern is that incentives are associated with the performance targets. Beyond data issues, some have questioned whether the short-term PMs employed by WIA, and PMs more widely, are adequately linked to long-term program impacts (Heckman et al., 2011). Indeed, studies suggest that the relationship between WIA PMs and long-term program outcomes is weak at best and negative at worst (Ibid). Without short-term measures that are sufficiently associated with long-term outcomes, DOL runs the risk of incentivizing the delivery of programs and services that “hit the target but miss the point” of the overall program (Ibid).
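The JTPA-style regression-adjusted target-setting procedure described above can be sketched as follows. This is an illustrative invention, not the Department's actual model: the baseline, reference values, and coefficients (`beta_unemp`, `beta_diploma`) are hypothetical, chosen only to show how a common target can be lowered where local conditions are harder.

```python
# Hypothetical sketch of regression-adjusted target setting.
# All numbers are invented for exposition.

def adjusted_target(baseline, unemployment_rate, share_no_diploma,
                    beta_unemp=-0.8, beta_diploma=-0.3):
    """Adjust a baseline entered-employment target (percentage points)
    for local economic and demographic conditions.

    Negative coefficients lower the target where conditions are harder:
    higher local unemployment, more participants without a diploma.
    """
    return (baseline
            + beta_unemp * (unemployment_rate - 5.0)     # deviation from a 5% reference
            + beta_diploma * (share_no_diploma - 20.0))  # deviation from a 20% reference

# A state with 8% unemployment and 30% of participants lacking a
# diploma receives a lower target than the 70% baseline:
print(adjusted_target(70.0, 8.0, 30.0))  # 70 - 2.4 - 3.0 = 64.6
```

The contrast with WIA is that, under negotiation, the mapping from local conditions to the final target is not fixed in advance in this way, so two states facing identical conditions may end up with different targets.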
4. Theoretical Model

Experiences and findings in the literature provide a useful frame for our evaluation of WIA performance measurement and incentives. Much of the literature speaks of performance measurement systems as an accountability mechanism in a principal-agent relationship, making useful propositions and predictions about agent behavior and related subsequent outcomes. In this section, we adopt a framework rooted in principal-agent theory to illuminate the dynamics of the relationships between the different actors that comprise the Public Workforce System. In doing this, we systematically evaluate WIA’s PMs, incentives, and performance data and assess their usefulness in bringing about DOL’s desired outcomes.

4.1 Principal-Agent Framework

In a principal-agent relationship, the principal hires an agent to act on her behalf, or to make decisions that impact the payoffs of the principal. A principal-agent problem exists when: (1) an agent has incentives to act in ways that do not align with the principal’s interests, and (2) there is an asymmetry of information whereby the principal does not know the true interests of the agent and cannot directly monitor the agent’s behavior to ensure that the agent is acting in the principal’s best interests. Where agent interests differ from those of the principal, moral hazard may occur and the agent may pursue his or her own interests at the principal’s expense. PMs and their resulting data can address this risk of moral hazard by measuring performance outcomes – thereby reducing the asymmetry of information. An incentive system that provides rewards or sanctions based on measured performance can align agent incentives with the principal’s desired outcomes. In this chapter, we identify how principal-agent problems may arise in the PWS and assess how successfully performance measurement and incentive controls under WIA can be expected to address these problems.
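The moral-hazard logic described above can be illustrated with a toy payoff model. All payoff numbers and action names here are invented for exposition; the point is only the mechanism: an unmonitored agent picks her own preferred action, and a sufficiently large reward attached to a measured, principal-aligned action flips her choice.

```python
# Toy illustration of moral hazard and incentive alignment.
# Payoff values are hypothetical, chosen only to show the mechanism.

AGENT_PAYOFF = {"pursue_own_interest": 10, "pursue_principal_goal": 6}

def agent_choice(bonus_for_measured_goal=0):
    """Return the action a payoff-maximizing agent chooses, given a
    performance bonus attached to the measured (principal-aligned) action."""
    payoffs = dict(AGENT_PAYOFF)
    payoffs["pursue_principal_goal"] += bonus_for_measured_goal
    return max(payoffs, key=payoffs.get)

# Unmonitored, the agent shirks; a measured reward larger than the
# payoff gap (10 - 6 = 4) realigns her choice with the principal's.
print(agent_choice())                            # pursue_own_interest
print(agent_choice(bonus_for_measured_goal=5))   # pursue_principal_goal
```

The design question the rest of this chapter addresses is whether WIA's actual rewards and sanctions are large, credible, and timely enough to play the role of `bonus_for_measured_goal` in practice.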
We begin by identifying the principals and agents in the PWS and their preferences, then turn to assessing whether WIA performance measurement adequately reduces asymmetries of information between DOL and its various agencies and whether the incentive system successfully aligns agents’ incentives with the Department’s interests.

4.2 Principals and Agents in PWS and their Interests

Our analysis recognizes principals to be those who supply funding and design PMs for their agents, with the ultimate aim of having the agents act toward the achievement of the principal’s objectives. In this particular context, we have multiple principals (who occupy different roles in a hierarchy of principals) and one agent. We classify principals as either primary or secondary based on their specific functions in the Public Workforce System. The Department of Labor is our principal of interest, as it supplies the financial resources to the WIA Adult Title IB Program, sets the overall program objectives, and designs the PMs to deliver the program’s intended benefits. Secondary principals also play a role in our analysis. Secondary principals may, in practice, have roles very similar to the primary principal’s, but they do not provide funding in this model. In this group are state and local workforce investment boards (LWIBs) that design targets and programs tailored to local needs, interacting directly
with the agents that are tasked to implement the programs. Secondary principals are distinct, however, in that they must respond and report to the primary principal, who allots them their budget appropriations. In setting state targets, the Department negotiates with the state, which, in turn, negotiates with LWIBs in setting local targets. LWIBs then direct the program and service delivery functions of the American Job Centers in their jurisdiction. In this sense, states and LWIBs can act both as principals and as agents. Importantly, both primary and secondary principals rely on their agents, the local implementing authorities, to make decisions on their behalf and carry out the tasks necessary to achieve the Department’s ultimate goals. While the different levels of actors working to deliver the WIA Title IB Adult Program may have similar interests, Figure 4.1 illustrates how the multi-tier system creates several points at which interests may diverge. It may be reasonable to assume that principals and agents at all levels are in some way interested in the effectiveness, efficiency, and equity goals held by DOL; however, actors at other levels must balance these interests with other, more local concerns. Where these interests conflict with the primary principal’s goals, problems may arise.

Figure 4.1 – The Principals and Agents of the WIA Title IB Adult Program and their Respective Interests

4.3 Reducing Asymmetries of Information

In the context of the Public Workforce System, asymmetries of information emerge because the Department of Labor, the primary principal, cannot directly observe the actions and effort of the agents. In the introduction to this report, we identified the key objectives that the Department of Labor seeks to achieve through WIA: effectiveness, efficiency, and equity (outlined in Figure 4.1). If performance measurement under WIA
functions well, it should substantially reduce the information gap between DOL and its agents. Analyzing two main WIA reporting requirements for agents, the WIA Standardized Record Data (WIASRD) and annual WIA state reports, we find that the WIA measurement system does collect information that can inform DOL of agents’ contributions to its objectives, potentially addressing this aspect of the principal-agent problem.

Table 4.1 – DOL Objectives

4.3.1 WIASRD Data

By requiring agents to submit WIASRD data, DOL solicits many useful pieces of information from agents regarding effectiveness and equity objectives. States are responsible for submitting quarterly data on three groups of information: (1) individuals, (2) activities and services, and (3) program outcomes. The first section reports data on the individuals participating in these programs, including gender, ethnicity, disability status, and other demographic characteristics. The second reports information relating to the WIA programs, including the type and nature of training and services received. The third section includes information on the achievements of participants after exit from WIA, reporting their employment and salaries for several quarters after exiting a WIA program. Together, this data allows the Department to observe the demographic composition of individuals trained, the kinds of training participants receive, and their employment post-training.

4.3.2 Annual Report Requirements

The WIA Annual Report narrative is another means of closing potential information gaps on all three of DOL’s objectives. States are required to submit reports each year, and state WIA funding is contingent on their submission.
Annual Report narratives require, among other things, performance data, information on the cost of the workforce investment activities relative to their effect on the performance of participants, and a report of state activities directed toward hard-to-serve populations. DOL also requires that states report on local economic conditions that might affect performance, and a listing of the approved state exemptions from federal WIA requirements with information on how the exemptions have allowed for improved performance outcomes (DOLETA, 2012b). Thus, WIASRD and annual state report requirements appear to solicit information needed to provide DOL insight into agent actions along its objectives, potentially reducing the information asymmetry problem. However, the usefulness of the information DOL collects is largely dependent on how the data is used and the reliability of that data and information. We return to these issues in greater detail in Sections 5 and 6 of this report.
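The three WIASRD reporting groups described in Section 4.3.1 can be sketched as a single participant record. The field names below are simplified stand-ins, not actual WIASRD element names or layouts; the sketch only shows how one record spans who was served, what they received, and what happened after exit.

```python
# Hedged sketch of the three WIASRD reporting groups.
# Field names are illustrative stand-ins, not real WIASRD elements.

participant_record = {
    # (1) individual characteristics
    "individual": {"gender": "F", "ethnicity": "Hispanic",
                   "disability": False, "veteran": False},
    # (2) activities and services received
    "services": {"training_type": "occupational_skills",
                 "received_intensive_services": True},
    # (3) outcomes after program exit
    "outcomes": {"employed_q1_after_exit": True,
                 "employed_q3_after_exit": True,
                 "earnings_q2_q3_after_exit": 11250.0},
}

# Aggregated over all exiters, records of this shape let DOL observe
# the demographic composition of participants, the services delivered,
# and post-exit employment and earnings.
print(sorted(participant_record))
```

Each group maps onto one of DOL's objectives: group (1) speaks to equity, group (2) to efficiency-relevant inputs, and group (3) to effectiveness outcomes.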
4.4 Aligning Agent Incentives

Having established that the current WIA reporting requirements may sufficiently reduce the asymmetries of information regarding agent contributions to DOL objectives, we analyze the extent to which the performance feedback mechanism – that is, the sanctions and rewards provided by DOL – is successful in aligning agent interests with those of the principal. We find that the current measurement and incentive structure creates a commitment problem for agents in the PWS in pursuing all three broad categories of DOL objectives. The incentive structures in place may therefore not be effective in eliciting the desired behavior among agents.

4.4.1 Commitment Problems

Although DOL’s key interests (summarized in Table 4.1) span a range of objectives, the PMs and incentivizing structure focus exclusively on effectiveness goals. DOL collects large quantities of data and information, as discussed in the previous section, but agent performance is measured exclusively along three key indicators: the rate at which program participants enter employment post-program, the rate of employment retention, and participants’ average post-program earnings, as seen in Table 4.2. These measures are enforced through a system of financial incentives. States that exceed their annual targets along these measures are eligible to apply for additional funds in the form of grants, while underperforming states are subject to sanctions. These sanctions and rewards function as a commitment device to keep agents committed to improving performance on DOL’s performance measures.

Table 4.2 – WIA Adult Program Common Measures (Source: DOL, 2006)

The current performance measurement system under WIA seems to lack consideration of how program goals interact with each other in practice.
Agents seeking to achieve equity goals by focusing services on disadvantaged groups may fail to achieve efficiency objectives, as the services and trainings required to assist those with substantial barriers to employment may require more resources. These agents may also underperform on effectiveness objectives compared to other agents who focus training on those with fewer barriers to employment and who are therefore more likely to succeed. Other agents focusing their efforts solely on achieving effectiveness measures may do so by cream skimming, at the cost of equity, or by reducing the number of total participants served so that each participant receives a greater proportion of the agent’s resources, coming at the cost of efficiency. The fact that the WIA
incentive structure focuses exclusively on effectiveness measures is likely to substantially influence the objectives agents pursue. Although it may be the case that agents within the PWS share the principal’s efficiency and equity goals, agents at the state, LWIB, and service delivery levels may have little motivation to devote effort and resources toward achieving them, because these goals remain unmeasured and without incentives to encourage their pursuit. This is especially true where efficiency or equity objectives may come at the expense of effectiveness. Thus, as seen in Figure 4.2, there is a move away from pursuing efficiency and equity objectives towards pursuing the incentivized effectiveness objectives only.

Figure 4.2 – Focusing Agent Efforts on Effectiveness

This situation creates a commitment problem for the agent within the PWS. A commitment problem is defined as a situation in which an actor would like to commit to a certain course of action, but is unable to do so because he can maximize his payoff by following a different course of action. Because PMs and the WIA incentive system are linked only to effectiveness goals, agents within the PWS cannot credibly commit to the pursuit of efficiency or equity goals. As long as efficiency and equity are not incorporated into the performance feedback mechanism, these objectives lack a commitment device. Incorporating PMs that address efficiency and equity as a formal part of the incentive system may address this problem. However, this will only be effective if the incentive system provides agents with credible threats and/or promises to ensure that agents act in alignment with the principal’s interests.

4.4.2 Evaluating the WIA Incentive Scheme

Incentive theory predicts that performance measures, when linked to rewards and sanctions, align the interests of agents with those of their principals. Agents will do what the PMs tell them to do, as long as the incentive system is designed well.
As explained in our literature review section, and according to Merchant and Van der Stede (2011), for an incentive system to be effective, we must ensure that (1) agents value what is provided through the incentive system, (2) incentives are considered large and visible enough by the agents, (3) agents know what actions are required of them and these actions are achievable, and (4) incentives are awarded in a timely manner. WIA incentives take the form of rewards or sanctions, based
on whether the agent fails to meet, meets, or exceeds the performance target (as explained in Appendix B). While Section 4.4.1 has addressed the issue of whether required actions are known, we explore whether the remaining criteria are in place to provide effective incentives to the agents. Firstly, the reward grants under the current incentive scheme must be reinvested into the program (DOLETA, 2014c). Therefore, it is difficult to conclude that agents value this incentive highly, as the reward will only allow the agent to improve and do better on already established measures (Ibid), and thus not pursue any autonomous interest. If the agent is already struggling to perform well enough to exceed the target, and thus be eligible for a reward, she is unlikely to be motivated by doing “more of the same”. Sanctions, on the other hand, take the form of a reduction in the budget when the agent fails to reach the target (DOLETA, 2007). It is obvious that such a reduction in resources will be costly to the agent, as it may lead to local reorganization and may potentially require downsizing of staff. Thus, avoiding sanctions is certainly of value to agents and should motivate performance sufficient to avoid them. On an individual level, as rewards do not directly translate into an individual agent’s financial benefit, we do not expect employees to respond actively to these current rewards. Although individual caseworkers within the PWS cannot internalize the rewards of the incentive system, we do expect that they are highly motivated to perform well enough not to lose their jobs. Thus, for both Job Centers and individual staff, we expect that the incentive scheme is effective in promoting performance against current PMs beyond the minimal accepted standard, but not above the performance target needed to initiate rewards.
Secondly, as currently implemented, WIA award grants range from $750,000 to $3,000,000, depending on the amount of available appropriated funds (DOLETA, 2007), which may fail to provide sufficiently large rewards. Often, the rewards are relatively similar irrespective of states’ size, with eligible states’ awards ranging only from $665,342 to $768,000 in 2011 (DOLETA, 2013b). Given the large variation among states in terms of program size and population, the size of rewards may therefore not be equally attractive to all states. Larger or more resource-rich states with larger budgets are less incentivized by rewards from this relatively non-proportional incentive scheme. Moreover, rewards under the WIA incentive scheme are small compared to external funding sources, where qualifying for the external funding scheme is not explicitly linked to achieving the WIA targets. As an example, the Ready to Work Partnership offers grants totaling $169,771,960 (DOLETA, 2014a). Other sources of funding also exist, such as the Workforce Innovation Fund and the Pay for Success Grants (DOLETA, 2014d), decreasing the relative size of the WIA rewards funds, which total around $10,000,000 annually (DOLETA, 2012a; 2013b). Thus, the existence of such external resources could potentially attenuate the motivational power of WIA rewards. This implies that agents may not be adequately incentivized through rewards to exceed the WIA performance targets. Sanctions are imposed based on whether states meet the target, and are calculated as a 1-4% budget decrease in a subsequent program year (DOLETA, 2007). As previously stated, such a reduction in resources is likely to have negative consequences that agents highly desire to
avoid. Once again, we expect that sanctions are large enough, and sufficiently highly valued, to incentivize agents to meet their targets. However, the credibility of the threat of financial sanctions is called into question by the low frequency with which such sanctions are actually imposed on underperforming states. In recent years, where states have been eligible for sanctions, the Department has instead opted for a guided path to improvement (L. Murren 2015, pers. comm., 5 February). While the decision not to impose sanctions on underperforming states may indeed have been appropriate in context, the fact remains that the inconsistent application of incentives may lessen their effect as commitment devices. DOL may not strictly enforce performance on targets due to considerable uncertainty in setting performance targets, or due to not wanting states to fall into a kind of poverty trap (low performance begetting even more low performance due to sanctions reducing already-limited budgets), and so it follows that sanctions may become a less credible threat to the agents. Lastly, the timeliness of incentives is important to establish a clear link between the agent’s efforts and the reward or sanction that follows. As discussed, there are severe data lags in agents’ reporting to the principal through the performance measurement system. Such data lags contribute to severe lags in the calculation of rewards and sanctions as well. The deadline for rewards based on performance for PY11 was June 30, 2013 (DOLETA, 2013b). Thus, rewards and sanctions for any program year are not implemented until two years later. This creates a disconnect between performance and incentives. If an agent fails to meet the target in year one, and then improves significantly and meets the target in the next year, she may still receive the sanction in year three based on her performance in year one.
This will make it challenging for the agent to continue her improvement and exceed the target in year four. This lag between performance and financial consequences raises questions about the ability of WIA incentives to achieve the intended behavioral responses.

Key findings:

1. Current performance management tools may address the asymmetry of information between DOL and its agents through the solicitation of extensive record data and state reports, so long as that information is timely and accurate.

2. The performance measurement and incentive system’s exclusive focus on effectiveness objectives may preclude states from pursuing other DOL objectives. Because targets and incentives are attached only to effectiveness measures, agents are not able to credibly commit to pursuing all three objectives. This is particularly true where resources are limited and pursuing all three objectives would come at the expense of performance on effectiveness.

3. The incentive structure, as is, may not provide agents with sufficient motivation to align their actions with DOL objectives. Rewards lack proportionality and timeliness, and sanctions are not consistently enforced, reducing their credibility as threats.
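The two-year lag between performance and its financial consequence, discussed in Section 4.4.2, can be sketched directly. The lag length follows the PY11 example above (rewards finalized in 2013); the year-by-year performance values are invented to reproduce the scenario of an agent sanctioned after she has already improved.

```python
# Minimal sketch of the incentive lag: the consequence applied in a
# given year reflects performance from LAG years earlier.
# Performance values are hypothetical.

met_target = {1: False, 2: True, 3: True, 4: True}
LAG = 2  # years between measured performance and its reward/sanction

def consequence_in(year):
    """Reward or sanction applied in `year`, driven by year - LAG."""
    perf_year = year - LAG
    if perf_year not in met_target:
        return None  # no measured performance yet
    return "reward_eligible" if met_target[perf_year] else "sanction"

# Despite meeting targets in years 2 and 3, the agent is sanctioned
# in year 3 for her year-1 shortfall:
print(consequence_in(3))  # sanction
print(consequence_in(4))  # reward_eligible
```

The sketch makes the disconnect concrete: the sanction arrives exactly when the agent's behavior no longer warrants it, weakening the feedback link the incentive is meant to provide.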
5. Quantitative Analysis of Public Use Performance Data

The conclusions from our theoretical analysis raised questions about whether the performance measures, data, and incentives in the WIA system could be expected to elicit agent performance in line with the Department’s desired outcomes. We posited that, with PMs and incentives as they are, agents are unlikely to independently pursue any objective aside from effectiveness, which is uniquely accounted for by the common measures under WIA. We also found reason to question whether the incentive system can reliably motivate agents to pursue even those effectiveness goals that are explicitly measured. In this section, we empirically investigate how the PMs under WIA measure and affect agent performance on each of the three Department objectives of effectiveness, efficiency, and equity.

5.1 Analytical Approach and Choice of States

To evaluate how well WIA PMs and data inform the Department on state performance toward DOL objectives, and how well they work to drive state performance toward those goals, we use two main data sources: public use Quarter 3, Program Year 2013 Workforce Investment Act Standardized Record Data (WIASRD) containing individual-level WIA participant data, and a panel of aggregated state-level performance data from Program Years 2008-2012, which we construct using national summaries of annual performance data, state-negotiated levels of performance, and annual state funding allocations for the WIA Adult Program. Using our panel of aggregated state data, we are able to assess trends over time, while the individual-level WIASRD data allows us to investigate the drivers of performance in a given year. To this end, we narrow our analytical focus to Program Year (PY) 2012 exiters only, as these records are the most complete in our dataset, yielding 1.6 million observations (for a full description of the data consulted in this analysis, please see Appendix C).
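The state-year panel construction described above can be sketched as a merge of the three sources on state and program year. All values below are placeholders, not actual DOL figures, and the field names are our own shorthand.

```python
# Illustrative sketch of assembling the state-year panel from
# (invented) annual performance, negotiated targets, and funding.

performance = {("OR", 2012): 61.2, ("TX", 2012): 68.4}  # AEER, %
targets     = {("OR", 2012): 63.5, ("TX", 2012): 66.0}  # negotiated, %
funding     = {("OR", 2012): 12.8, ("TX", 2012): 74.1}  # $ millions

panel = []
for state, year in sorted(performance):
    key = (state, year)
    panel.append({
        "state": state,
        "py": year,
        "aeer": performance[key],
        "target": targets[key],
        # the outcome examined in Figures 5.2-5.4:
        "distance_from_target": round(performance[key] - targets[key], 1),
        "funding_millions": funding[key],
    })

for row in panel:
    print(row["state"], row["distance_from_target"])
```

Each row is one state-year observation; stacking PY08 through PY12 for the ten sample states yields the panel used to trace target and performance trends over time.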
The scope of our empirical investigation is narrowed to a sample of ten states that broadly represent four geographical regions in the U.S.: Oregon, Washington, and Idaho (West); Minnesota and Indiana (Midwest); Texas and Alabama (South); New York, Pennsylvania, and Connecticut (Northeast). The states were selected with the aim of representing the geographic, demographic, economic, and political diversity of the United States while allowing for comparisons within and across regions. Consistent with our previous sections, we limit the scope of our analysis to the Adult Program. Doing so presents two main advantages. First, the Adult Program is WIA’s largest program, both in terms of funding and participants served. This provides us with a large sample size to work with. Second, the Adult Program, designed specifically to prepare low-skilled jobseekers for employment, serves a range of individuals broader and more diverse than that of any other WIA program. A study of the Adult Program, therefore, allows us to evaluate how well WIA PMs inform the
Department of performance in a context where the stakes are high and the conditions are challenging.1

5.2 Effectiveness

To evaluate state performance on the effectiveness objective, we use the data that states submitted to the Department of Labor to report on its indicators. The Adult Program has three common measures: (1) entered employment, (2) employment retention, and (3) average earnings. Performance on any one of these measures is defined as a success rate: the number of successful interventions divided by the number of total interventions (DOL, 2006).2 In this section, we seek to investigate how well these PMs inform the Department on state performance toward effectiveness objectives, and how well they work to drive state performance toward such goals. In our theoretical section, we emphasized that the usefulness of PMs depends on whether they measure the right things. Therefore, an aim of this section is to understand the degree to which WIASRD data might meaningfully inform the Department of how successful states are in promoting the three DOL objectives. We first test, broadly, whether the performance targets set forth by the Department actually promote better performance. To this end, we take data from the Adult Program in PY12 (Figure 5.1) to explore the relationship between the targets and performance on entered employment. We observe a strong correlation between the set Adult Entered Employment Rate (AEER) targets and actual performance for the states. The states selected for our sample, shown in red, closely fit this national trend line. This correlative relationship could suggest that states do well when they aim to hit the targets set by DOL. However, we cannot conclude that the targets are causal drivers of performance. It is possible that the opposite could be true: that it is actually performance that drives the target levels.
Though the second scenario may seem implausible at first, it is certainly a possibility, because the Department’s target-setting practices incorporate past performance. As the final target level reflects observed performance from the Program Year two years past, adjusted for changes in the characteristics of the individuals served, it could plausibly be the case that targets are simply good predictors of performance.

1 As not all states completed their reporting prior to the publication of the PY13, Q3 WIASRD data, some states are absent from the dataset, most notably California and New Jersey, who were both late to report their data.

2 While these measures can speak to WIA program outcomes, on their own they cannot inform DOL of program impact. In order to conduct an analysis of program impact, we would need to compare program outcomes for WIA participants with outcomes of a control group who did not receive WIA trainings and services. Because WIASRD contains outcome data only for program participants, such impact analyses are not possible. Researchers continue to grapple with reliable methods of assessing WIA program impact (for a comprehensive review, see Orr et al., 2011). Quasi-experimental methods using supplementary administrative datasets often struggle to establish a valid counterfactual, while experimental approaches pose ethical, cost, and feasibility concerns. This body of research will continue to develop insight on the impact of WIA.
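The success-rate definition used for the common measures (successful interventions over total interventions) can be sketched directly. The exiter records below are invented, and the denominator is simplified: the official measure also applies exclusions (for example, around employment status at participation) that we omit here.

```python
# Simplified sketch of an entered-employment-style success rate.
# Records are hypothetical and the denominator rules are simplified.

exiters = [
    {"id": 1, "employed_q1_after_exit": True},
    {"id": 2, "employed_q1_after_exit": False},
    {"id": 3, "employed_q1_after_exit": True},
    {"id": 4, "employed_q1_after_exit": True},
]

def entered_employment_rate(records):
    """Share of exiters employed in the first quarter after exit."""
    successes = sum(r["employed_q1_after_exit"] for r in records)
    return successes / len(records)

print(entered_employment_rate(exiters))  # 0.75
```

Retention and average earnings follow the same pattern with different success criteria and denominators, which is why all three common measures can be computed from the WIASRD outcome fields alone.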
Figure 5.1 – Relationship Between Targets and Performance, PY12
Source: DOL PY12 WIA State Annual Reports and Summaries

Figures 5.2 to 5.4 show the development of targets, performance, and distance from target for our ten selected states and the national average over the PY08 to PY12 period. The three charts allow us to evaluate the relationship between the set targets and observed performance, both in absolute terms and in terms of reaching the set target. In a well-functioning performance management system, performance targets should drive performance, so long as proper incentives are in place. However, when looking at target and performance trends, it is difficult to identify targets as drivers of performance over time.

Looking at the national average distance between actual performance and performance targets over time in Figure 5.2, we observe that after an increase in distance from PY08 to PY09, likely due to the financial crisis and subsequent recession, the gap between the AEER and the targets has slowly closed, with the national average performance in PY12 actually exceeding the average target. Initially, this seems like a positive indication of state performance improvements, but the interpretation of this trend is complicated by the fact that we observe a steady decline in average performance targets (see Figure 5.3). Interestingly, while targets have steadily decreased, Figure 5.4 reveals that state performance has actually increased since 2009.
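The target-to-performance relationship tested above amounts to a simple correlation between negotiated targets and observed rates. The sketch below computes a Pearson correlation on invented placeholder values, not actual DOL data.

```python
# Pearson correlation between negotiated AEER targets and observed AEER.
# All rates below are invented placeholders for illustration.
def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

targets = [0.62, 0.58, 0.71, 0.66, 0.75]  # hypothetical negotiated targets
actuals = [0.64, 0.57, 0.73, 0.65, 0.78]  # hypothetical observed AEER
r = pearson(targets, actuals)
print(f"r = {r:.2f}")  # r = 0.99
```

As the text notes, a high correlation here is consistent with targets driving performance, but equally consistent with past performance driving targets.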
A closer look at the states in our sample reveals a similar puzzle. The states in Figure 5.2 appear, in general, to be trending toward smaller target shortfalls, with some regularly exceeding their targets. While this may be driven in part by the steady or slightly upward trend in actual performance visible in Figure 5.4, it is unclear whether states' close proximity to their targets in PY12 is driven more by improved performance or by the consistent decreases in performance targets visible in Figure 5.3.

Figure 5.2 – AEER Distance from Target, PY08-PY12
Source: DOL WIA State Annual Reports and Summaries
Figure 5.3 – AEER Negotiated Targets, PY08-PY12
Source: DOL WIA State Annual Reports and Summaries

Figure 5.4 – AEER Actual Performance, PY08-PY12
Source: DOL WIA State Annual Reports and Summaries
5.2.1 Improving Effectiveness Measures

While the WIA common measures certainly speak to several of DOL's effectiveness objectives, as discussed above, they paint a somewhat confusing picture of state performance. It is difficult to compare performance across states, given that targets vary substantially across states based on such things as past performance and economic and demographic characteristics. This issue can be addressed, in part, by comparing the gaps between states' actual and targeted performance, but that comparison is complicated by large fluctuations in targets over time, making it difficult to determine how much of the gap is explained by movement in performance versus movement in the target.

Fluctuating performance targets are not necessarily problematic in and of themselves. In fact, they are to be expected, given that state performance targets are calculated via a regression-based adjustment system that accounts for changes in the state's economic climate and WIA participant demographic characteristics in the previous year, and are then negotiated with state officials. Given DOL's interest in gaining a clear understanding of performance within states, while still accommodating the economic and demographic challenges states may face, it may be more sensible for DOL to develop a system of weighting performance rather than weighting targets. This approach has two important advantages. First, it would allow the Department to create measures of performance that are comparable across states, eliminating the ambiguity described above. Second, it would allow the Department to account for relevant state characteristics in the year measured, rather than those in previous years, as is the case with the current target-setting methodology.
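The idea of weighting performance rather than targets can be illustrated with a toy adjustment: rescore each state's observed AEER for a contemporaneous condition (here, the unemployment rate) so that scores are comparable across states. The coefficient and state figures below are assumptions for illustration only; DOL's actual regression-based model is more elaborate.

```python
# Toy "weighted performance" score: observed AEER adjusted for how far the
# state's unemployment rate sits from the national rate in the year measured.
# The beta coefficient is an assumed illustrative value, not an estimate.
def adjusted_performance(aeer, unemployment, national_unemployment, beta=1.5):
    # beta: assumed AEER penalty per point of excess unemployment
    return aeer + beta * (unemployment - national_unemployment)

# Two hypothetical states with identical raw AEER but different labor markets.
states = {"State A": (0.70, 0.09), "State B": (0.70, 0.05)}  # (AEER, unemployment)
national_u = 0.07
for name, (aeer, u) in states.items():
    print(name, round(adjusted_performance(aeer, u, national_u), 3))
```

Under this sketch, State A (facing higher unemployment) scores 0.73 and State B scores 0.67, even though both reported a raw AEER of 0.70 — the adjustment credits performance achieved under harder conditions.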
5.2.2 Additional Measures of Effectiveness

In addition to weighting performance on common measures rather than performance targets, DOL may gain better insight into other effectiveness objectives by creating measures with which to gauge the relevance of the WIA trainings and services provided within states. In an attempt to learn whether the training provided is related to the category of employment obtained, we created a job match variable in the WIASRD dataset, equal to one where the occupational category of training matched the occupational category of the job held in the first quarter after exit.3 However, we find only 23,987 matches out of the almost 1.6 million individuals in the dataset; in other words, we have this data for only about 3% of registered individuals. A 2014 GAO report discusses this issue and provides recommendations on establishing a performance measure for training-related employment. Such a job match measure could be a good way to incentivize providing the right skills to WIA participants, rather than just any skills.

3 Using calculated fields 3020 and 3021 to create a dummy variable equal to 1 where the values of the two fields match.

Taking the idea of providing the right skills even further, DOL may do well to measure the extent to which the WIA trainings and services provided to participants are in high demand in local labor markets. This is a defined objective in DOL's Strategic Report (2013), but currently no measure of states' success on this objective exists. There is a requirement to
have a majority of business representation on both state and local WIBs (WIA, 1998, Sec. 117), which is supposed to improve the promotion of in-demand labor skills, but we cannot say whether this is the case. As different skills are likely to be in demand in different LWIAs, we would like to test whether the right skills are promoted in the local implementation of WIA. This could be done if states were required to record the category of the high-demand sectors in different LWIAs and the category of training provided to individuals. On this basis, a performance indicator measuring the extent to which states are able to provide in-demand skills could be implemented, by matching the category of training to the category of in-demand sectors. This would give us a success rate measure: the percentage of individuals provided with training in skills in high demand. The target for such a measure would need to be related to local conditions and set in collaboration with local WIA operators.

Both a job match and a demand-driven skills measure would add significantly to current effectiveness measures. A job match measure could, in theory, be implemented with data that is already collected in WIASRD today, but too much of that data is currently misreported. It appears too difficult for states and caseworkers to record the 8-digit O*Net code4 for occupation correctly. This is reflected in the many comments in the WIASRD data booklet, where a number of data errors for occupational and training category are mentioned (SPRA, 2014). It is therefore recommended that DOL develop a set of occupational categories that are easier to understand for those asked to record the data. Such categories should be broad enough to allow easy recording, but narrow enough to separate different industries.

Key findings:

4.
While the WIA common measures certainly speak to several of DOL's effectiveness objectives, as discussed above, they paint an ambiguous picture of state performance. It is difficult to compare performance across states, given that state targets vary substantially based on such things as past performance and economic and demographic characteristics. This issue can be addressed, in part, by comparing the gaps between states' actual and targeted performance, but this effort is complicated by large fluctuations in targets over time, making it difficult to determine how much of the gap is explained by movement in performance versus movement in the target.

5. Currently, DOL does not collect sufficient data to gauge the relevance of the WIA trainings and services provided within states. Although DOL requires states and Local Workforce Investment Boards (LWIBs) to identify the skills and trainings in high demand by local employers, no measure exists that would allow DOL to determine the proportion of trainings provided that fulfill this criterion, nor does one exist that would determine how frequently the jobs that participants enter post-training are related to the WIA training they received.

4 The O*Net code is a standardized numbering system for specific professions.
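The two measures proposed in section 5.2.2 can be sketched as follows. The job-match flag mirrors the dummy described in footnote 3 (equal to 1 where the two occupational category fields match), while the demand-match rate assumes hypothetical LWIA-level high-demand category lists that WIASRD does not currently contain. All records, codes, and category labels below are invented examples.

```python
# Sketch of the two proposed relevance measures. Field names f3020/f3021
# stand in for WIASRD calculated fields 3020 and 3021; the codes shown are
# placeholders, not real O*Net values.
def job_match(rec: dict) -> int:
    """1 where the training category equals the post-exit job category."""
    training, job = rec.get("f3020"), rec.get("f3021")
    if training is None or job is None:  # missing or misreported data
        return 0
    return int(training == job)

def demand_match_rate(participants, high_demand) -> float:
    """Share of participants trained in a category on their LWIA's
    (hypothetical) high-demand sector list."""
    hits = sum(p["training"] in high_demand.get(p["lwia"], set())
               for p in participants)
    return hits / len(participants)

records = [
    {"f3020": "29-1141.00", "f3021": "29-1141.00"},  # match
    {"f3020": "29-1141.00", "f3021": "53-3032.00"},  # no match
    {"f3020": None, "f3021": "53-3032.00"},          # unreported training
]
print(sum(job_match(r) for r in records), "of", len(records), "records match")

participants = [
    {"lwia": "LWIA-1", "training": "healthcare"},
    {"lwia": "LWIA-1", "training": "retail"},
    {"lwia": "LWIA-2", "training": "it"},
]
high_demand = {"LWIA-1": {"healthcare", "logistics"}, "LWIA-2": {"it"}}
print(f"Demand-match rate: {demand_match_rate(participants, high_demand):.0%}")
```

Treating missing category data as a non-match, as here, is one reason broader, easier-to-record categories matter: with 8-digit codes, most records fall into the missing bucket.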
5.3 Efficiency

As mentioned in the Department of Labor's strategic report, states are facing increasing budgetary constraints (DOL, 2013). Efficiency is, therefore, an increasingly important objective, as outlined in our introduction. To test whether agents are successful in ensuring efficiency in their delivery of WIA services, we supplement WIASRD data with state-level WIA Adult Program funding allocation data in order to assess how efficiently states deploy their resources to achieve desired outcomes. We calculate a cost per job (CPJ) measure for this analysis by dividing the federal WIA allotment for each state by the number of WIA Adult Program exiters who entered employment in that state in each program year. This very simple measure, constructed from readily available data in the WIA national summaries of annual performance data and the WIA adult activities program dollar tables, allows us to assess states' efficiency and to compare state rankings in terms of CPJ and AEER, in order to understand how a potential tradeoff between different objectives plays out in practice.

Table 5.1 – Cost Per Job PY08-PY12, by State
Source: DOL PY12 WIA State Annual Reports and Summaries & DOL WIA State Statutory Formula Funding

5.3.1 Discussion of Results

Table 5.1 and Figure 5.5 show the CPJ for the ten states in our selection over the PY08-PY12 period. At first glance, we see vast differences between the calculated scores for different states, ranging from $111 for Oregon in PY12 to $21,026 for Connecticut in 2009. This indicates that states differ dramatically in how efficiently they deliver their WIA services. There is also large in-state CPJ variation over time, of up to $9,323. Table 5.2 adds to this story by showing the differences in overall state ranking when CPJ is applied as a measure rather than the traditional success rate measures for PY12.
Idaho, which in PY12 performed better than any other state in terms of
AEER, ranks only 39th when the CPJ measure is applied. On the other hand, one of the worst states in terms of AEER performance, Oregon, ranks third in terms of CPJ. The rankings are nearly inverses of one another, although this is less the case when the comparison is applied to all fifty states and the District of Columbia (Appendix D). Thus, we are left with the impression that the effectiveness scores calculated as part of state reports and national summaries fail to address efficiency concerns, due to the lack of a relevant performance measure.

Figure 5.5 – Cost per Job, PY08-PY12
Source: DOL PY12 WIA State Annual Reports and Summaries & DOL WIA State Statutory Formula Funding

Table 5.2 – Cost per Job and AEER Rankings PY12, by State
Source: DOL PY12 WIA State Annual Reports and Summaries & DOL PY12 WIA State Statutory Formula Funding
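The CPJ construction and the ranking reversal it produces can be sketched as follows. The allotments, exiter counts, and AEER values below are invented for illustration, not actual PY12 state figures.

```python
# Cost per job as constructed in section 5.3: federal WIA Adult allotment
# divided by the number of exiters who entered employment.
def cost_per_job(allotment_usd: float, entered_employment: int) -> float:
    if entered_employment <= 0:
        raise ValueError("need at least one exiter who entered employment")
    return allotment_usd / entered_employment

# Invented state figures: (allotment, exiters entering employment, AEER)
states = {
    "State A": (12_000_000, 1_200, 0.87),
    "State B": (5_000_000, 4_000, 0.61),
    "State C": (9_000_000, 2_250, 0.72),
}
cpj = {s: cost_per_job(a, n) for s, (a, n, _) in states.items()}
by_aeer = sorted(states, key=lambda s: -states[s][2])  # higher AEER is better
by_cpj = sorted(cpj, key=cpj.get)                      # lower CPJ is better
print("AEER ranking:", by_aeer)  # ['State A', 'State C', 'State B']
print("CPJ ranking: ", by_cpj)   # ['State B', 'State C', 'State A']
```

In this constructed example the two rankings are exact inverses, mirroring the Idaho/Oregon reversal discussed above: the highest-AEER state spends the most per job entered.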
To better understand what the CPJ measure tells us, we look more closely at the relationship between CPJ and AEER. Figures 5.6 and 5.7 show CPJ and AEER performance calculated for the WIA Adult Program in PY12, and provide a picture of divergent performance on the two measures. Many of the high-CPJ states colored red in Figure 5.6 also have high AEER, as shown in Figure 5.7. This indicates that states struggle to perform well on effectiveness and efficiency simultaneously. Thus, we suspect a trade-off between performance in terms of CPJ and AEER.

Figure 5.6 – State Variation in Cost Per Job, PY12
Legend in $USD (nominal), in four bins: $47-$4,069; $4,069-$8,120; $8,120-$10,110; $10,110-$24,644.
Source: DOL PY12 WIA State Annual Reports and Summaries & DOL PY12 WIA State Statutory Formula Funding

Figure 5.7 – State Variation in Adult Entered Employment Rate, PY12
Legend in fraction of participants who entered employment, in four bins: 53.7%-67.1%; 67.1%-73.79%; 73.79%-79.2%; 79.2%-86.68%.
Source: DOL PY12 WIA State Annual Reports and Summaries

Figure 5.8 shows a scatterplot of CPJ and AEER for all fifty states and the District of Columbia in PY12, and suggests a positive relationship between the two variables. Such a
positive relationship indicates a possible trade-off between CPJ and AEER or, put differently, between efficiency and effectiveness. It is plausible that states can improve their AEER by spending more money per participant, thus reducing efficiency. The positive relationship identified accords with our initial interpretation of the changes in state rankings when CPJ is applied as a measure. However, such a relationship does not account for other factors that may be related to both the effectiveness and the efficiency of service delivery.

Figure 5.8 – Relationship Between Cost per Job and AEER, PY12
Source: DOL PY13, Q3 WIASRD, DOL PY12 WIA State Annual Reports and Summaries & DOL PY12 WIA State Statutory Formula Funding

One such factor is the extent to which states' WIA participants are co-enrolled in other job-training programs and services, such as Wagner-Peyser programs, the Trade Adjustment Assistance Community College and Career Training Grants Program, and other federal and local initiatives, which may substantially affect cost efficiency measures across states. The red squares in Figure 5.8 mark states with co-enrollment rates above 90% in the Wagner-Peyser program, based on WIASRD data. Comparing the incidence of co-enrollment in states with their CPJ estimates, we observe that several states with high rates of co-enrollment also tend to be states with low CPJ. Table 5.3 further demonstrates the substantial variation in Wagner-Peyser co-enrollment alone across the states in our sample. However, we also find that high co-enrollment is not exclusively associated with low CPJ, with Washington and Texas as examples of states with high co-enrollment and higher levels of CPJ. Still, there is a clear overrepresentation of states with high co-enrollment rates in the lowest CPJ segment,