Chapter Twelve
Change and Celebrate
Insanity is doing the same thing over and over again
and expecting different results.
—ALBERT EINSTEIN
The fifth element on the path to easy and effective impact and excellence seems obvious, but it is frequently overlooked and often avoided in social-sector organizations. Nonprofit and government leaders are often hesitant to implement changes to programs and operations that disrupt the status quo, even when things are not going well for the organization.

Typically, leaders avoid change because they are afraid of failure or an adverse public reaction. The popular expression, "The devil you know is better than the devil you don't know," accurately describes why many leaders shy away from change and often go out of their way to avoid it. However, this reluctance to implement needed change is not found among social-sector leaders who embrace a high-performance measurement culture.
Why is this the case? Measuring and evaluating results allows these leaders to make decisions with confidence and leads them to embrace change rather than avoid it.
Embracing Change Confidently
Many organizations make the mistake of rushing into change or avoiding risk altogether. Innovation is often squelched in the social sector because stakeholders perceive the consequences of failure to be too great. They fear the wrong decisions will result in bad press, loss of elections, reduction in funding, or negative consequences to clients. This fear perpetuates the status quo and results in stalled advances throughout the social sector. On the other end of the spectrum, some social-sector leaders leap to decisions based on consensus or a best-guess approach without taking the time to evaluate the data, and often implement change too quickly and haphazardly.

When social-sector organizations base decisions on outcomes from measures aligned with organizational mission and objectives, they naturally embrace change in a more confident manner. Access to the right data equips organizations that might otherwise be hampered by endless second-guessing or a shot-in-the-dark approach to move forward confidently. Solid data-collection efforts provide a natural safety net that empowers leaders to make innovative and sometimes even radical performance improvements that might otherwise seem foolhardy or ill-advised.
When interpreted and applied correctly, data functions as a figurative parachute, allowing the organization to effect change at a brisk but controlled pace. Organizations naturally become more innovative and steer clear of the adverse effects experienced when decisions are either avoided outright or made swiftly without the backing of such data. Confident that embracing change will improve outcomes, leaders can test new approaches on a manageable scale.
If the change is successful, the leader can reinforce this success and implement it on a larger scale. If the change does not achieve the desired results, then the leader has newfound knowledge about what does not work. In addition, leaders have access to data that will allow the leadership team to institute a plan for improvement, eventually leading to stronger, more effective, and more efficient operations.

The most damaging thing an organizational leader can do is to design a well-crafted performance and outcomes system and communicate the results but fail to use the data to take corrective action. Such a practice leads the organization into a cycle that replicates what Albert Einstein defined as insanity: "doing the same thing over and over again and expecting different results."
Here are three reasons social-sector organizations should not invest time and resources in data collection if they are not committed to developing a plan for making meaningful changes using the data they collect:
1. Inaction is a waste of time and resources. There is no point to capturing, collecting, and analyzing data if it will sit unused in a database or be housed in a shelved report.

2. Inaction breeds staff resentment and distrust. Not using data that staff, clients, and stakeholders invest time in gathering, evaluating, and entering discourages participation in future data-collection compliance efforts. When data is not used, leaders send a message to staff that data has little or no value or impact on the organization's growth and success.

3. Inaction produces the same results the next time. This practice will keep an organization playing small and not reaching its highest possible level. Inaction hurts clients. Often an organization's staff and program directors have at their fingertips valuable information with the potential to greatly improve programs and client outcomes. Not implementing change based on this information keeps clients from receiving the best possible experience from the organization.
The Power of Using Data: A Case Study
An Ohio child welfare collaborative instituted performance measures to ensure that cases of abuse, neglect, and dependency were being processed in a timely manner and leading to optimal outcomes for families. By state law, child abuse, neglect, and dependency cases must be disposed of within ninety days of a finding of guilt in an adjudication hearing. If a dispositional hearing does not occur within that time frame, the court, on its own motion or on the motion of a party, may dismiss the case without prejudice (Ohio Revised Code, 2003).
This ninety-day rule led to many county court cases being dismissed prior to disposition and then refiled. Refiling resets the clock and causes a delay, which is generally not in the best interest of the child, as it further delays permanency and stability. In addition, frequent use of the ninety-day rule violates the spirit of the state statute.

In this case, the group discovered that 23 percent of abuse, neglect, and dependency cases were being dismissed and refiled under this ninety-day rule. This practice was having a negative impact on children by keeping them in temporary placement longer than necessary. In addition, it increased costs to the courts and clogged up dockets, leading to mismanagement of staff time and, often, to an increase in child welfare expenditures, which were necessary to cover placement for those children.
A statewide evaluation conducted by the Ohio Supreme Court examined the increase in dismissed and refiled cases. A collaborative committee was convened to conduct an internal evaluation. This committee identified several reasons for the high incidence of dismissed cases.
First, the committee found that the magistrates were not holding attorneys accountable for excessive requests for dismissals. The committee discovered other factors related to scheduling difficulties, for both magistrates and other affected parties. Based on these findings, the collaborative committee identified several key solutions. They agreed to implement the suggested changes and convene monthly to review the data on dismissals. Over the course of six months, the dismissal and refiling rate was reduced from 23 percent to 10 percent, a significant improvement.
The information the group used for evaluation and decision making included data they were already required to submit to the Supreme Court regularly. They had been submitting this data for years, yet prior to convening the collaborative committee, no action was being taken on this data. Unfortunately, it took a statewide evaluation conducted by a funder to spur the court to action on its data. Once the court convened the committee and took a strong, data-driven approach, using the available data to effect positive change, they saw swift results. The committee found a dramatic decrease in the percentage of dismissals and refiled cases system-wide.
A Closer Look. This case study demonstrates the power of data-driven decisions, consistent communication, and the ability to effect change based on data. Unfortunately, a majority of government and nonprofit organizations are similar to the child welfare collaborative. They wait to take action on collected data and suffer the consequences of doing so. Only a handful of today's social-sector organizations have a proactive plan for taking action on the data they collect. Fewer still are committed to using collected data to implement lasting change.
I would estimate about 30 percent of organizations with well-aligned performance measures and measurement systems routinely and systematically make improvements based on data. The remaining 70 percent may intend to make changes based on their measures, but in most organizations, six months later no significant changes have been made. Furthermore, these social-sector organizations are often experiencing the same results. Typically, the organization has access to an abundance of relevant and useful data. Information is not the problem. True success is prevented because the organization fails to act or lacks a sustained commitment to implementing change.
Best Practices for Using Data to Implement Change
Successful organizations embrace a clear commitment to make data-driven decisions and implement change when the data suggests it is needed. Without a structured plan for using data, it is natural to put implementation on the back burner or to delay change-management activities. Next, we will consider best practices for using data to implement change in the social-sector organization.
Tie Measures to Performance Goals
The more measures are tied to the day-to-day work of the organization and its staff, the more likely employees will be naturally motivated to make the adjustments needed for the organization to achieve its desired outcomes. Both individual and group performance goals can be set based on results. When performance and outcome measures are relevant and aligned with the mission, this highly successful practice engages the staff and substantially improves the results realized in mission-driven organizations.
Convene a Data Review Committee
One barrier to making changes with data lies in securing buy-in from key stakeholders. One successful practice is assigning a standing committee primary responsibility for a monthly or quarterly review of progress on all performance and outcomes measures. Composed of seven to nine members who represent a diverse group of decision makers, this committee is tasked with making decisions based on data and evaluating progress toward the organization's desired outcomes. If the organization's staff has fewer than seven people, the entire staff should meet as a data review committee.
It is important to select a mix of committee member types, including idea generators, strong strategic thinkers, those capable of systems thinking, and those who take a process-oriented approach. Such diversity maximizes the group's productivity and effectiveness. The outcome of this committee's work will be not only a plan for change but also the clear communication of the importance of embracing change. Such a group will naturally be motivated to implement a plan for positive progress.
Establish a Champion
On the committee, a champion should be assigned to each measure. One individual can be the permanent keeper of all activity surrounding a particular measure, or one champion can be assigned for all measures.
The designated champion assumes responsibility for ensuring quality data-gathering compliance and reviews the assigned measure on a regular basis. A champion also ensures the full implementation of any action items aimed at increasing a particular measure, and evaluates and communicates the degree of success achieved.
Champions should be given the authority to give direction, coach others in the organization, and offer advice for that particular measure. Champions should be members of the data review committee rather than other individuals within the organization. If there is more than one champion for the set of measures, the data review committee chair should provide oversight for implementation and changes instituted for all the measures.
Meet Regularly
The data review committee should adopt a schedule of regular committee meetings in conjunction with the organization's annual planning process, placing meeting dates on the calendar. It is best if members know that they will be meeting on a consistent schedule, such as the third Wednesday of every month. If a meeting must be rescheduled, set a new date at the time of cancellation. A consistent meeting schedule underscores the value and importance of the committee's assigned responsibility, both to those who serve on it and to others in the organization.
Use a Consistent Meeting Format
At meetings of the data review committee, a standardized format should be followed. Each meeting should include an examination of data trends for the past twelve to eighteen months, organized using a template such as the one shown in Table 12.1.
Table 12.1. Data Review Committee Meeting Template (column headings)

Performance or Outcome Measure | 12-Month Rolling Average | Current Quarter's Average | Performance Goal | Under/Over Goal | Past Changes That Impacted This Goal
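For committees that track measures in a spreadsheet or simple database, the template's computed columns are straightforward to derive from monthly data. The sketch below is a minimal illustration of that calculation; the function name, field labels, and sample figures are hypothetical assumptions, not drawn from the book.

    # Minimal sketch: derive Table 12.1's computed columns from monthly
    # measure values. All names and figures here are illustrative.

    def review_row(measure_name, monthly_values, goal):
        """Build one review row from monthly values (oldest first) and a goal."""
        window = monthly_values[-12:]                  # up to the last twelve months
        rolling_12 = sum(window) / len(window)         # 12-month rolling average
        quarter = monthly_values[-3:]                  # the last three months
        current_quarter = sum(quarter) / len(quarter)  # current quarter's average
        return {
            "measure": measure_name,
            "12-month rolling average": round(rolling_12, 2),
            "current quarter's average": round(current_quarter, 2),
            "performance goal": goal,
            "under/over goal": round(current_quarter - goal, 2),
        }

    # Hypothetical example: monthly values for "average days to process a case."
    months = [168, 160, 151, 144, 139, 130, 120, 112, 104, 98, 92, 88]
    print(review_row("Average days to process a case", months, goal=90))

A row like this, reviewed alongside notes on past changes that impacted the goal, covers each column of the template.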
Using this format, the committee determines whether and to what degree desired outcomes are occurring in the organization. If the data suggest that changes are required, the committee should identify what additional information is needed, outline proposed changes, and design a plan for implementation. Lastly, the group should discuss how the desired results will be communicated and celebrated.
Committee Success: A Case Study
A method similar to the one described in this chapter for implementing change was employed to help a committee considering strategies to improve outcomes for delinquent juveniles. The committee focused its discussion on best practices for alternatives to juvenile detention. Typically, the juvenile courts that implemented such best-practice strategies reduced costs and also saw a significant reduction in long-lasting negative consequences for public safety and youth development.
The committee selected specific practices designed to reduce unnecessary delays in case processing, which would result in shorter lengths of stay in detention for juveniles. The desired outcomes for the court were the efficient use of nonsecure alternatives and reductions in failure-to-appear and rearrest rates.
This review committee used data to examine how effectively policy changes enabled the court to reach its desired outcomes. A detailed analysis had previously been conducted for the court and had provided the committee with information showing that case-processing delays in the juvenile court system were related to increases in continuances and increased docket volume from school truancy filings.
To assess favorable progress toward their goals, the committee met to examine quarterly trends in the number of days juveniles were held in detention, the average days it took juvenile court cases to be processed, the number of formal school truancy filings in the system, and the length of stay for youth being held on transfers to adult court (called "bindovers," as the court binds over the youth for trial or further inquiry). The table below shows the quarterly trends the committee examined.
Quarterly Trends Reviewed by the Committee (five consecutive quarters, oldest first, with period average and performance goal)

Measure | Quarterly Values (oldest to most recent) | Average | Goal
Average days held in juvenile detention | 11.27, 12.08, 9.72, 9.95, 8.25 | 10.25 | <7
Average number of days for a case to be processed | 168.64, 144.19, 120.31, 92.24, n/a | 111.98 | <90
Average number of continuances per case | 0.90, 0.87, 0.80, 0.66, 0.32 | 0.71 | 0
Number of chronic school truancy cases filed | 480, 2, 178, 472, 690 | 364 | n/a
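As a rough illustration of the committee's quarterly review, the short sketch below compares each measure's most recent value against its goal, using figures transcribed from the table above; the data structure and names are assumptions for illustration, not the committee's actual tooling.

    # Minimal sketch: flag each measure against its goal and show the trend.
    # Values are transcribed from the table above; structure is illustrative.

    measures = {
        "Average days held in juvenile detention": ([11.27, 12.08, 9.72, 9.95, 8.25], 7.0),
        "Average days for a case to be processed": ([168.64, 144.19, 120.31, 92.24], 90.0),
        "Average continuances per case": ([0.90, 0.87, 0.80, 0.66, 0.32], 0.0),
    }

    for name, (quarterly, goal) in measures.items():
        latest = quarterly[-1]
        change = latest - quarterly[0]   # movement since the first quarter reviewed
        status = "at or under goal" if latest <= goal else "still over goal"
        print(f"{name}: latest {latest}, change {change:+.2f}, {status}")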
A Closer Look. The committee began its work with the careful examination of trends during the second quarter of 2011. They discovered that, during this period, youth were being held in detention an average of eleven days. During this three-month period, 480 truancies had been filed. The court took an average of 168 days, almost double the desired 90 days, to process cases. In addition, cases were likely to experience at least one continuance.

The committee used this data to implement specific strategies, beginning with immediate shelter care hearings for youth being held unnecessarily in detention. In addition, they established new committees assigned to study school truancy issues and other issues related to case processing.
Most of these strategies were implemented in the first and second quarters of 2012. At the end of the second quarter, the group reconvened to examine the progress and impacts of their efforts. They discovered that both the emergency shelter care practice and a focus on the reduction of detention lengths were having a positive impact. The average number of days youth were held in detention decreased from 11.27 to 8.5 days.

The committee also realized reductions in the average number of days to process cases, down from 168 to 92 during the second quarter. In addition, the data showed a significant reduction in continuances, from .90 per case to .32 per case. Reviewing the data was encouraging to the committee and suggested that their efforts and discussions in pursuit of these outcomes were working.
One measure that did not improve was chronic school truancies. There was a significant increase in filings compared to the previous year's filings over the same quarter. In fact, filings had increased from 480 to 690. Based on the data review, the committee decided to ramp up its previous implementation efforts while also exploring ways to work with schools and other community groups to reduce the number of school filings.
Implement Change
High-performance measurement cultures require leaders with the ability to successfully enable constructive change. Social-sector leaders achieve desired success by focusing on the problem at hand and also by anticipating and responding to stakeholders' fear of the unknown and their natural resistance to change.
When measures are aligned with the true mission of the organization, it is much easier to overcome such resistance and to significantly reduce fear, if not eliminate it. This is especially true when the leader has done a good job of cultivating and communicating the vision of the organization.
Without the necessary data and information, effective change becomes almost impossible. Change is scary, and stakeholders and staff need to be both engaged and confident in the organization's leadership during times of change. Leaders who come to staff with data-informed decisions can obtain buy-in for proposed strategies and break through natural barriers to change.
The Measurement Culture Study revealed that 77 percent of organizations with a high-performance measurement culture were successful at using data to create organizational change. In contrast, none of the organizations with a moderate- or low-measurement culture achieved change. In other words, fear and resistance to change are seldom barriers to success for organizations with a high-performance measurement culture. Measures and data provide the solid foundation for organizations that successfully implement positive change.
Celebrate Success
Measurement results naturally lead to change in high-performance measurement cultures, and it is important for organizations to celebrate such change, whether great or small. Even when positive change occurs, it can prove stressful. Recognizing the progress the organization has accomplished motivates staff to move forward and gives them the confidence needed to continue building on their success into the future.
Recognizing and rewarding success is the foundation of growth and continuous improvement.
Organizations with effective recognition programs have strong leadership participation and supportive cultures. These organizations increase job satisfaction and productivity by communicating and celebrating their staff's hard work. They provide linkages between employee efforts and the impact and outcomes the organization contributes to the community.
Communicating and sharing outcome measures provides a way to recognize, educate, and bond with stakeholders. When measures are used as positive internal management tools, they allow employees to know precisely how their contribution is making a difference and contributing to the vital mission of the organization. Such a practice may also inspire employees to see how they can increase the impact they are already making.
Studies indicate that social-sector employees highly value the celebration of success. Celebrations are the orchestrated experiences of linking relationships and value to the contributions made; the award is the icing on the cake (Saunderson, 2004). Rewarding employees for reaching outcome goals provides this linkage of relationships and value.
Effective recognition systems do not have to include large monetary rewards. As highlighted in Chapter Two, the administrator of the Dallas County Tax Office created an effective reward system with a series of reliable performance standards that measured important elements of the organization's mission. Instituting such rewards allowed the agency to operate with fewer staff, control budget growth, improve staff morale, and increase customer satisfaction, all while experiencing unprecedented demand for service.
Potlucks, pizza parties, an afternoon off, a day's vacation, and an organization-wide picnic are a few of the ways nonprofit and government organizations might choose to celebrate the impact that staff members are making for the organization and its mission. When presenting an award, high-performing organizations place an emphasis on how specifically the individual or team contributed to the increased outcome measures. Celebrating success leads to increased success in the future.
The reason that organizations with high-performance measurement cultures are significantly more successful than those with moderate- and low-measurement cultures boils down to how these different organizations use their data.
The organizations that will continue to thrive, have the greatest impact, and make our world a better place are those that reward the dedicated work of staff and stakeholders. Such recognition motivates people to continue to give all they can and to strive for the next level of success. In addition, these organizations are not satisfied with the expectation of mediocre results. Rather, they consistently monitor outcomes and results and make the necessary adjustments and changes to achieve the highest success possible.
Next Steps
By following the Five C's of Easy and Effective Impact and Excellence shared in this book, leaders prepare nonprofit and government organizations for greater impact. In the next chapter, we will look at ways in which social-sector organizations can leverage a high-performance measurement culture and thrive in today's challenging environment.
Chapter Twelve Discussion Questions
1. Develop a list of at least five things your organization is doing well. How could staff celebrate those successes?

2. How will you regularly celebrate success?

3. How does your organization currently use data for program improvement? What data does the organization already have that could be used for program improvement? Develop a plan to start reviewing and taking meaningful action on existing data.
Obstacles to the Effectiveness of Performance Funding
A SENTIMENT THAT IS QUITE COMMON among institutional officials is that performance funding has had little real impact on institutional performance and that it is largely a symbolic practice. Three studies on Tennessee and one on North Carolina cited several administrators and faculty members who argued that performance funding has simply been a rote activity, with actors only going through the motions of collecting data and submitting reports. For example, the faculty senate chairman at one North Carolina community college dismissed performance funding as "mere paper shuffling" (Harbour & Nagy, 2005, pp. 457–458). A department chair at the University of Tennessee at Knoxville argued: "The impact that this has had on us in the department has really been to simply add another administrative task. I don't think . . . that it has changed the way a single faculty member teaches, the kind of materials that a single faculty member presents. It has had no impact on our curriculum" (as quoted in Hall, 2000, pp. 78–79).
This pervasive undercurrent of skepticism about performance funding reflects the many obstacles that it encounters when implemented. Among the obstacles that crop up in the research literature on the impacts of performance funding are the inappropriateness of many performance measures employed; instability in funding levels, indicators, and measures; the brief duration of many performance funding programs; funding levels that are too low; shortfalls in regular state funding for higher education; uneven knowledge and expertise about performance funding within institutions; inequalities in institutional capacity; and resistance and "game-playing" by institutions. These obstacles are discussed in this chapter.
Inappropriate Performance Funding Measures
Many studies discuss the reservations held by administrators and faculty about how well different indicators and their measures capture the real performance of their institutions. In fact, a faculty leader at one North Carolina community college dismissed performance indicators in North Carolina as "pretend" measures (Harbour & Nagy, 2005, p. 458).
Learning Gains
In eight studies of Tennessee, two of South Carolina, and one of Florida, institutional officials expressed skepticism about the validity of the learning assessments being used as measures of institutional performance. In Tennessee, respondents were particularly skeptical that the state-mandated assessment of general education and the major field exit standards adequately captured what faculty aimed to teach (Banta & Fisher, 1984, p. 34; Freeman, 2000, p. 98; Hall, 2000, pp. 95–96; Shaw, 2000, pp. 88–89; Tanner, 2005, p. 85; Williams, 2005, p. 93). For example, arguing in terms echoed by several other interviewees, an administrator at the University of Tennessee asserted:

    We know general education is important because we all do it. But we don't know what it is and we don't know how to figure out whether this school or that school is doing a good job with it. We don't know that any better now than we did twenty years ago. And so, giving people money or withholding money on the basis of general education is a very slippery proposition. (Quoted in Hall, 2000, p. 96)
Retention and Graduation Rates
Three studies each on Florida and Tennessee and one each on Ohio and Washington raised concerns regarding how retention and graduation rates are measured. Community college officials and faculty in Florida asked why colleges should be penalized if vocational students leave college without a degree for well-paying jobs during times of economic growth. The community college officials and faculty argued that it is enough if students have been able to reach their goals or at least have attained a useful level of education (Bell, 2005, p. 109; Gray et al., 2001, p. 32; Morris, 2002, p. 131; see also Dougherty & Hong, 2006a, pp. 70–71). Furthermore, an Ohio study noted that graduation rates usually do not take transfer into account; thus, community colleges with students who successfully transfer to a 4-year college without having first received an associate degree usually cannot count such students as graduates. In fact, transfer students are often mistakenly treated as if they were dropouts (O'Neal, 2007, pp. 135–136). Finally, some studies raised the issue that graduation rates do not take into account differences between institutions in the academic preparation or degree ambitions of students. For example, community colleges tend to have higher percentages of students with social and academic disadvantages that make it hard to get a degree, even if that is the students' intent (Dougherty & Hong, 2006a, pp. 70–71).

A problem has also been noted with the use of numbers graduating rather than rates of graduation. A college could increase its numbers graduating, even if the graduation rate is declining, if it is experiencing sizable enrollment increases (Jenkins et al., 2009; Shulock & Jenkins, 2011, p. 10).
Job Placement Rates
Concerns over job placement indicators cropped up in two studies each on Florida, Tennessee, and Washington. A major criticism by college officials and faculty was that job placement rates are dependent on the state of the local economy, which varies over time and by region, in ways that are not under the control of the colleges (Banta et al., 1996, p. 32; Bell, 2005, p. 137; Dougherty & Hong, 2006a, p. 71; Nisson, 2003, p. 113). An official at a Washington community college noted:

    I think the measures they chose were so ridiculous it became obvious. I mean one of them was a wage-level, a certain dollar amount per hour that every college had to average in their programs [that graduates attained]. Well, [my college] is always at the top because we have a lot of high-wage programs but Yakima and Walla Walla [rural community colleges], those places, no, they're never going to get there probably. And it doesn't make any sense. (Quoted in Dougherty & Hong, 2006a, p. 71)

In Florida, an additional concern was that job placement indicators hurt institutions whose graduates obtain jobs out of state, because those jobs are not counted by state datasets (Gray et al., 2001, p. 40).
Institutional Differences
Three studies on Washington, two on South Carolina, and single studies of Florida, Missouri, Ohio, Pennsylvania, and Tennessee noted the concern of college officials about how state performance funding programs failed to take into account differences among institutions in their mission and in their capacity to meet performance demands. With regard to mission, tensions arose in Tennessee over perceptions that the performance funding program insufficiently acknowledged that institutions have different missions. For example, a university administrator argued that the research mission of that institution was not reflected in its performance funding results (Hall, 2000, p. 95). Meanwhile, Washington community colleges with a greater focus on academic transfer argued that they are at a disadvantage because the highest potential for amassing performance points under the new Student Achievement Initiative occurs in adult basic education and developmental education (Jenkins et al., 2009, p. 37).

With regard to capacity to meet performance demands, Washington community colleges serving greater numbers of at-risk students perceived themselves to be at a disadvantage in amassing performance points because such students tend to need "costly wrap-around services" in order to succeed (Jenkins et al., 2009, p. 37). Similarly, in South Carolina, the state Legislative Audit Council found that:

    The standardization of measures for schools in each sector raises opposition by institutional representatives. The measures do not fully take into account the differences that exist among institutions within a sector. For example, a majority of the same measures have been applied to MUSC [Medical University of South Carolina] and Clemson when they have radically different student populations. (South Carolina Legislative Audit Council, 2001, p. 23)

Similar sentiments about performance funding programs' disregard of differences among institutions in their student bodies and therefore performance capacities arose as well in Missouri and Ohio (Naughton, 2004, pp. 89–90; O'Neal, 2007, pp. 130, 137; also see Dougherty & Hong, 2006a, pp. 71, 73).
Instability in Performance Funding Levels, Indicators, and Measures
When budgets and indicators are unstable, higher education leaders find it hard to decide where to focus the efforts of their institutions and they are afraid to take chances. A survey in the late 1990s of community college and 4-year college officials in five states with performance funding found that 40.1% of respondents rated budget instability as an extensive or very extensive problem of performance funding in their state (Burke, 2002a, p. 77).

We found validation for this point in one study each of Florida, Ohio, South Carolina, and Washington. In Florida, a dean of vocational education at a Florida community college argued that the state should "allow us to know what the rules are so that we can plan appropriately. I think that the indecision each year has really put us in a predicament that has strapped us for resources" (quoted in Dougherty & Hong, 2006b). As it happens, in Florida, funding for the state's now suspended Performance-Based Budgeting (PBB) program fluctuated over the years. It started at 2% of state appropriations for community college operations in fiscal year 1996–1997, dropped below 1% in 2001–2002, stayed at that level until 2005–2006, and then jumped to 1.8% (Dougherty & Natow, 2010). In addition to shifts in the amount of funds that are involved in performance funding programs, the particular performance indicators used to allocate those funds can also change. In Florida, the PBB program experienced considerable changes in the performance indicators used. Florida's PBB added 10 performance indicators and dropped three in the 12 years between 1996–1997 and 2007–2008, an average of one change per year (Dougherty & Natow, 2010).
The Brief Duration of Many PF Programs
Many performance funding programs do not last for many years, thus undercutting their capacity to produce effects. At least half of all the states that have enacted performance funding programs have later discontinued them, often after only a few years. For example, performance funding lasted only 2 years in Arkansas and Washington (in the 1990s) and 4 years in Illinois and Minnesota (Burke, 2002a; Dougherty et al., 2012; also see Table 1). This short duration makes it hard for performance funding programs to effectively stimulate the organizational changes in colleges and universities that will produce improved student outcomes.

Several factors are involved in the early demise of many performance funding programs. A key factor is higher education opposition to performance funding, stimulated by a perceived lack of adequate state consultation with higher education institutions, the use of performance indicators that higher education institutions did not find valid, and a perception of erosion of campus autonomy and of high implementation costs to institutions. This opposition hardened during the recessions of the early and late 2000s. As state appropriations for higher education faced cuts or failed to keep pace with enrollments, higher education institutions moved to protect their core state funding and turned against performance funding. Another important cause of program demise was the decision in Florida (the Workforce Development Education Fund) and Washington (the 1997–1999 program) to finance performance funding by holding back a portion of the state appropriation to higher education institutions and requiring the institutions to earn it back through improved performance. This program design feature aroused great animosity on the part of higher education institutions (Dougherty et al., 2012).
Inadequate State Funding of Performance Funding
One obstacle cited in four studies of performance funding in Florida and one in Washington was the simple fact that not enough money is involved (Dougherty & Hong, 2006a; Jenkins et al., 2012). In many states, the proportion of state funding of higher education tied to performance outcomes has been 1% or less (Dougherty et al., 2013a). Even if the amount is higher, as it was in Florida for a while, the impact of performance funding will be undercut if it does not keep pace with rising performance. Funding under the Workforce Development Education Fund in Florida did not rise as fast as improved performance. As a result, the bounty for each graduate dropped over time (Bell, 2005, pp. 156–157; Dougherty & Hong, 2006a, p. 72; Gray et al., 2001, p. 29; Poisel, 1998, p. 94; Wright et al., 2002, p. 164).
Shortfalls in Regular State Funding
This issue was raised in seven studies on Tennessee, four on Washington, and two on South Carolina. In Tennessee, the funds allocated under the regular enrollment-based formula had not kept pace with enrollment growth (Freeman, 2000, pp. 88–89; Hall, 2000, pp. 93–94; Latimer, 2001, pp. 95–98; Lorber, 2001, p. 85; Shaw, 2000, p. 97). For example, according to its chief financial officer, Walters State Community College received only 89% of the base funding called for by the state's regular funding formula for the 1999–2000 academic year (Shaw, 2000, p. 97). As a result of these shortfalls, performance funding no longer functioned as bonus funding but instead was used to make up the deficit in regular state funding. A dean at the University of Memphis stated that performance money "gets chewed up just trying to keep the ship afloat on a day to day basis" (Latimer, 2001, p. 95). This practice eventually led a number of Tennessee higher education officials to argue that performance funding provided the state with an excuse to cut the formula funding. Meanwhile, in Washington, even as the Student Achievement Initiative took effect in recent years, state formula funding dropped. As a result, many community college presidents and senior administrators became resentful, feeling that performance funding was no longer a bonus but rather only a partial redress of dropping state support (Jenkins et al., 2009, pp. 40–41; Shulock & Jenkins, 2011, p. 12).
Uneven Knowledge about Performance Funding Within Colleges
The effective implementation of performance funding has also been hampered by the fact that awareness of performance funding and its requirements varies greatly within institutions, with those at the top of the hierarchy possessing greater understanding of and responsibility for the performance funding process than middle-level administrators and faculty, who also must play an important role in implementing performance funding. For example, in a survey of 2-year and 4-year college administrators in five states with performance funding, Burke (2002a, pp. 63–64) found that while 88% of the top administrators were "very familiar" or "familiar" with their state's performance funding program, only 58% of the academic deans and 40% of the department chairs were familiar with it. Similar findings show up in nine studies on Tennessee, three each on Ohio and Washington, and one on North Carolina.

In Ohio, a survey of 224 administrators at 13 public universities revealed that knowledge of performance funding was stratified within institutions in a manner similar to that described by Burke (2002a). Executive-level administrators such as presidents and vice presidents were more knowledgeable than were department/unit-level administrators, with 38% of the former but only 22% of the latter reporting that they were aware of the state's Success Challenge performance funding program (Schaller, 2004, p. 151).
In Washington, interviews at 17 community colleges established that, while the state's Student Achievement Initiative (SAI) performance funding program was known and understood "fairly well" to "very well" by presidents, senior administrators, and institutional research staff, the same was not true of faculty and student support services staff. In fact, the majority of faculty members interviewed had only a limited understanding of the SAI (Jenkins et al., 2009, pp. 19–22, 33). The following description by a vice president of instruction at a Washington community college was typical: "With our faculty we've told them that this initiative is happening. . . . Faculty know that something is happening, but that is the extent of it. . . . The faculty have had it explained to them, but if you talked to them, they couldn't explain it back" (as quoted in Jenkins et al., 2009, p. 20).

Similar inequality of knowledge about performance funding was also reported in Tennessee (Freeman, 2000, pp. 81–82; Hall, 2000, pp. 73–74; Latimer, 2001, pp. 72–76; Shaw, 2000, pp. 66–67, 72, 86–87). At the University of Tennessee, Knoxville, an administrator noted: "I don't think most people inside the university understand [the state performance funding system]. I would say 95% of the faculty don't know anything about it" (as quoted in Hall, 2000, p. 74).
This informational inequality has been attributed to a number of different causes. First, many administrators view performance data collection and analysis as an administrative task that faculty need not be concerned about (Freeman, 2000, pp. 81–84; Hall, 2000, p. 73; Harbour & Nagy, 2005, p. 453; Jenkins et al., 2009, p. 21). For example, at Volunteer State Community College in Tennessee, a senior administrator noted: "[Faculty] don't need to know. To me our campuses now are large enough, and they're diverse enough, and they're so specialized that people . . . really don't have the time, energy, or intellect . . . for everybody to become an expert on the aspects of performance funding" (as quoted in Freeman, 2000, p. 82).

In Washington, administrative reluctance to widely publicize performance funding within their institutions was tied to uncertainty about its longevity and implications. College administrators were leery about widely publicizing the performance funding program until they got a better idea of how it would work and whether it would last. They reportedly did not want to involve faculty in an evanescent effort that they might well resist (Jenkins et al., 2009, p. 21).
Further, lack of faculty awareness also may be tied to a faculty perception that performance funding is not central to the faculty role (Jenkins et al., 2009, p. 21; Shaw, 2000, p. 87). A faculty member at Walters State Community College in Tennessee noted: "As a faculty member . . . most of your work is wrapped up in your discipline, preparing notes for class, and spending time with students. Only when you as a faculty member are forced to address those issues regarding performance funding, do you participate and integrate them" (as quoted in Shaw, 2000, p. 87).

In Washington, a factor contributing to faculty perceptions that performance funding is not very relevant to their jobs is the fact that most of the community colleges have focused their efforts initially on student services and improving basic skills, which are emphases at the margins of effort and attention of most college-level faculty (Jenkins et al., 2009, p. 21).
Another factor contributing to lack of knowledge and feeling of responsibility on the part of faculty and middle-level administrators is the fact that performance indicators are typically measured at the institutional level alone and not at the unit level as well. As a result, faculty and middle-level administrators may not be aware of the performance of their particular academic or administrative units relative to other units at their college or comparable units at other colleges (El-Khawas, 1998, p. 325; Ewell, 1994a).

Buttressing this lack of awareness and responsibility at the unit level is the fact that performance funding typically flows into the general operating funds of institutions. Allocating performance funds to the general operating fund makes it difficult for those not directly responsible for the overall institutional budget to perceive the connection between their actions and the receipt of performance funding (Freeman, 2000, p. 93; Hall, 2000, p. 92; Lorber, 2001, pp. 84–85). For example, a department chair at Tennessee Technological University argued: "[Money from performance funding] is for the general fund and to most faculty that's a black hole. . . . What am I going to get out of this? Nothing. . . . Do I get travel? No. Do I get a new computer? No" (as quoted in Lorber, 2001, p. 85).
Lack of faculty knowledge about and perceived responsibility for performance funding makes it hard to mobilize faculty efforts to make it effective. College success in meeting performance demands cannot be achieved through administrative action alone but must ultimately involve the concerted action of the faculty, which in turn requires their knowledge and acceptance of performance funding. Moreover, lack of in-depth involvement by faculty and midlevel administrators in the design and implementation of performance funding programs raises the possibility of unintended impacts that administrators and others cannot anticipate.
Inequality of Institutional Capacity
Two studies each on Florida and Washington discussed how differences in institutional capacity serve as an obstacle to the effective implementation of performance funding (Bell, 2005, p. 135; Dougherty & Hong, 2006a, p. 73; Jenkins et al., 2009, p. 28; see also Dowd & Tong, 2007; Witham & Bensimon, 2012). In Florida, a state community college official mentioned: "I think some of the smaller colleges have certainly the potential for that problem. [Two colleges] come to mind, where there's just so few people in some of these programs, I know this is causing them a lot of problems trying to keep up with things" (quoted in Dougherty & Hong, 2006b).
Meanwhile, in Washington, an evaluation of the Student Achievement Initiative performance funding program for community colleges (established in 2007) revealed wide disparities in institutional capacity to collect and analyze performance data. The data supplied by the Washington State Board for Community and Technical Colleges to the community colleges need to be supplemented by data collected by the institutions themselves, but the colleges differ widely in their capacity for data analysis. At several colleges, there was a shortage of institutional research (IR) staff with the skills and time to rigorously analyze college performance data. And even colleges with larger IR departments still had to collect and analyze their own data, and they differed widely in their capacity to do so (Jenkins et al., 2009, p. 28). An early evaluation of the Washington Student Achievement Initiative concluded:

    Even at colleges with larger IR departments, college personnel suggested that the achievement point database does not provide enough information to pinpoint areas of weakness, let alone design improvement strategies or track the progress of ongoing student retention efforts. As a result, colleges have to use their own data to do such analyses, and there is wide variation in the capacity of colleges to do so. (Jenkins et al., 2009, p. 28)
Institutional Resistance to and Gaming of the System
The obstacles to the effective implementation of performance funding are matters not just of capacity but also of will. Eight studies on Tennessee, two each on Florida, South Carolina, and Washington, and single studies of Missouri and Ohio documented ways in which institutions try to game the performance funding program to secure high performance scores without actually improving their performance. This gaming takes two main forms: setting low institutional goals that can be easily attained, and taking actions that produce apparently desirable performance but in ways that require minimal effort and are not in keeping with the spirit of performance funding.
Setting Low Goals
One form of gaming occurs in systems that allow institutions to set their own goals or targets. Institutions can set goals that are easily achievable rather than goals that stretch the institution. Such behavior has been documented in South Carolina (South Carolina Legislative Audit Council, 2001, p. 19). Moreover, five studies of Tennessee institutions reported the same (Banta et al., 1996, p. 35; Bogue, 2002, p. 98; Freeman, 2000, pp. 89–90; Latimer, 2001, pp. 90–91; Williams, 2005, p. 94). For example, a high-level administrator at a Tennessee public university stated:

    If your success, your monetary success, was not connected with that [your funding], you might sit down and make . . . goals that are a little more adventuresome. Because we know that achievement is connected to the funding, we will set down and make, I'm not saying we do it, but the temptation to do it is there—we sit down and make goals that are a little more obtainable. . . . Let's say we know we'll increase our minority enrollment by a certain percent. We also know that corresponds to certain equivalency in funding. Is it the funding that is driving the goal or the realistic expectation that we can reach out for that goal? That's just human nature to me. (Quoted in Latimer, 2001, p. 91)
Deceptive Compliance
In some instances, institutions have complied with the requirements associated with performance funding but only minimally and deceptively. Given budget concerns and the potential for performance funds to be directed toward general operations, administrators have looked for ways in which performance points can be increased without substantial expenditures, effort, or even actual improvements. Three studies on Tennessee, two on Florida, and one on Ohio discussed ways in which participants tried to game the system. A vice president at a Tennessee university described how programs can manipulate their student assessment results by postponing field exams that are likely to yield lower results because "you don't want a low score to affect you for five years" (Lorber, 2001, p. 72). Also, a faculty leader at one Tennessee university noted that departments could secure favorable external reviews of their departments by calling on friends to perform the external audits (Baxter, Brant, & Forster, 2008, p. 58). Finally, in Ohio, a number of university branch campuses were found to have relabeled as transfer students rising juniors who remained within the university system, in order to fulfill the transfer expectation of the Performance Challenge (Dunlop-Loach, 2000, p. 92).
In sum, performance funding programs encounter many different obstacles, and these obstacles may play an important role in explaining why those programs have not had more striking impacts on student outcomes. But even if these obstacles were removed, there is another problem that needs to be considered: the sometimes sizable unintended impacts that performance funding programs can create, especially if they are not carefully designed. We now turn to reviewing research findings on this subject.
(LEARNING AND DEVELOPMENT IN ORGANIZATIONS)

Reflect on your initial reasons or motivations for moving forward with your research. Inevitably, you wanted change: change to a process, an approach, an outcome, or in leadership. As a group, discuss the following points to ponder as they apply to your research and its potential impacts on your organization:

1. Change is possible at any and every level, but why hasn't it already occurred?

2. Data can prove the need for change, guide or direct the type of changes to be made, and still not be enough to implement change.