Evaluation of an Innovative Leadership-Development Program
at a Private, Not-for-Profit University
by
Renee Venezia
An Applied Dissertation Submitted to the
Abraham S. Fischler School of Education
in Partial Fulfillment of the Requirements
for the Degree of Doctor of Education
Nova Southeastern University
2015
Approval Page
This applied dissertation was submitted by Renee Venezia under the direction of the
persons listed below. It was submitted to the Abraham S. Fischler School of Education
and approved in partial fulfillment of the requirements for the degree of Doctor of
Education at Nova Southeastern University.
Barbara Packer-Muti, EdD Date
Committee Chair
Dian Moorhouse, EdD Date
Committee Member
Ronald J. Chenail, PhD Date
Interim Dean
Statement of Original Work
I declare the following:
I have read the Code of Student Conduct and Academic Responsibility as described in the
Student Handbook of Nova Southeastern University. This applied dissertation represents
my original work, except where I have acknowledged the ideas, words, or material of
other authors.
Where another author’s ideas have been presented in this applied dissertation, I have
acknowledged the author’s ideas by citing them in the required style.
Where another author’s words have been presented in this applied dissertation, I have
acknowledged the author’s words by using appropriate quotation devices and citations in
the required style.
I have obtained permission from the author or publisher—in accordance with the required
guidelines—to include any copyrighted material (e.g., tables, figures, survey instruments,
large portions of text) in this applied dissertation manuscript.
Signature
Renee T. Venezia
Name
April 25, 2015
Date
Acknowledgments
Ralph Waldo Emerson is credited with the adage that it is not about the
destination, but rather the journey. This journey has not been a solo venture. I have had
guidance and assistance every step of the way. If this endeavor could be likened to
running a marathon, then the people in my life have been part of Team Venezia.
My brother, Joseph Venezia, has been my special running partner who showed up
every day I needed him. He ran every single mile with me, pointing out alternatives in the
trail that proved to be better routes.
My dissertation chair, Dr. Barbara Packer-Muti, was my trainer, preparing and
enabling me to reach the goal. My mother, Marguerite Venezia, has been my personal
assistant, taking care of my daily needs to allow me to focus on my training. All my
family and friends were cheerleaders along the whole 26 miles; their names all belong on
this work for I could not have done it without them. They have my gratitude always.
To my father, John Venezia, who always believed I could achieve anything: This
is for you, Poppy.
Abstract
Evaluation of an Innovative Leadership Development Program at a Private, Not-for-
Profit University. Renee Venezia, 2015: Applied Dissertation, Nova Southeastern
University, Abraham S. Fischler School of Education. ERIC Descriptors: Leadership
Training, Program Evaluation, Management Development, Universities
This applied dissertation was designed to determine the effectiveness of employee
leadership training at a private, not-for-profit university. The goal of the study was to
provide leaders at the university with evaluative information using the Kirkpatrick 4-level
evaluation model regarding the effectiveness of a new leadership-development training
program being launched at the university for more than 400 supervisors and managers.
Literature supports
the need for program evaluation, but employee training programs tend to be superficially
evaluated, leaving executives without sufficient data to decide whether the training was
effective and, if so, to what extent the organization benefits from the investment. If
structured well, this study would serve as a model for future training evaluation at this
university.
The evaluation was based on Kirkpatrick’s 4 levels of evaluation; training participants
were surveyed to determine reaction, learning, and behavior. Survey responses were
analyzed to determine Level 4, results. Participants in the study were university managers
and supervisors with 3 or more subordinates. Study results showed overall satisfaction
with training by participants, evidence of learning, and training behaviors observed on the
job by supervisors and direct reports of participants, but lack of evidence to confirm the
training meets executive stakeholder expectations.
Table of Contents
Page
Chapter 1: Introduction....................................................................................................... 1
Statement of the Problem........................................................................................ 1
Program................................................................................................................... 4
Purpose of the Evaluation....................................................................................... 5
Definition of Terms................................................................................................. 6
Chapter 2: Literature Review.............................................................................................. 8
Conceptual Framework......................................................................................... 10
Synthesis of Findings............................................................................................ 13
Need for Further Research.................................................................................... 14
Leadership Programs and Assessments at Other Universities.............................. 15
Summary of the Literature.................................................................................... 32
Research Questions............................................................................................... 34
Chapter 3: Methodology ................................................................................................... 35
Program................................................................................................................. 35
Participants............................................................................................................ 37
Evaluation Model.................................................................................................. 38
Instruments............................................................................................................ 39
Procedures............................................................................................................. 40
Chapter 4: Results............................................................................................................. 44
Population and Data Collection ............................................................................ 44
Research Question 1 ............................................................................................. 46
Research Question 2 ............................................................................................. 47
Research Question 3 ............................................................................................. 56
Research Question 4 ............................................................................................. 63
Research Question 5 ............................................................................................. 75
Chapter 5: Discussion ....................................................................................................... 76
Overview of the Study .......................................................................................... 76
Findings in Relation to Research Questions ......................................................... 76
Implications........................................................................................................... 81
Conclusions........................................................................................................... 82
Limitations............................................................................................................ 83
Recommendations for Further Research............................................................... 85
References......................................................................................................................... 87
Appendices
A University Vision and Mission ...................................................................... 93
B Training Value Interview Instrument ............................................................ 95
C University 2010–2020 Business Plan Overview ........................................... 97
D Leadership Training: Level 1 Reaction Survey............................................. 99
E Leadership Training: Level 2 Knowledge Test ........................................... 122
F Leadership Training: Level 3, 360-Degree Survey: Observable On-the-
Job Behavioral Changes .............................................................................. 130
Tables
1 Model of Comparison Analysis for Level 2 Pre- and Posttraining
Knowledge Testing........................................................................................ 41
2 Example of Level 2 Learning Survey Responses Indicating No Evidence
of Learning..................................................................................................... 42
3 Example of Level 2 Learning Survey Responses Indicating Evidence of
Learning......................................................................................................... 43
4 Training Intervention Session Dates by Module ........................................... 44
5 Executive Stakeholder Expectations of the Leadership-Development
Program.......................................................................................................... 47
6 Level 1 Evaluation: Satisfaction With Course Content and Importance of
Content to Supervisors, by Module ............................................................... 55
7 Project Management and Measurement Module: Pre- and Posttest
Response Frequency and Percentage............................................................. 56
8 Chi-Square Statistical Analysis of Project Management and
Measurement Pre- and Posttest Responses.................................................... 57
9 Project Management and Measurement Pre- and Posttest Parameter
Estimate and Wald Significance Test............................................................ 57
10 Performance Management Module: Pre- and Posttest Response
Frequency and Percentage ............................................................................. 58
11 Chi-Square Statistical Analysis of Performance Management Pre- and
Posttest Responses......................................................................................... 58
12 Performance Management Pre- and Posttest Parameter Estimate and
Wald Significance Test.................................................................................. 58
13 Managing Conflict and Change Module: Pre- and Posttest Response
Frequency and Percentage ............................................................................. 59
14 Chi-Square Statistical Analysis of Managing Conflict and Change Pre-
and Posttest Responses .................................................................................. 59
15 Managing Conflict and Change Pre- and Posttest Parameter Estimate
and Wald Significance Test........................................................................... 60
16 Communication in the Workplace Module: Pre- and Posttest Response
Frequency and Percentage ............................................................................. 60
17 Chi-Square Statistical Analysis of Communication in the Workplace
Pre- and Posttest Responses........................................................................... 61
18 Communication in the Workplace Pre- and Posttest Parameter Estimate
and Wald Significance Test........................................................................... 61
19 Visioning and Planning Module: Pre- and Posttest Response Frequency
and Percentage............................................................................................... 61
20 Chi-Square Statistical Analysis of Visioning and Planning Pre- and
Posttest Responses......................................................................................... 62
21 Visioning and Planning Pre- and Posttest Parameter Estimate and Wald
Significance Test ........................................................................................... 62
22 Level 2 Evaluation: Pre- and Posttest Percentage of Correct Answers by
Module........................................................................................................... 63
23 Project Management Behavioral Survey Response Frequency, All
Respondents................................................................................................... 64
24 Project Management and Measurement Chi-Square Analysis: Direct
Reports Versus Supervisors........................................................................... 65
25 Project Management and Measurement Chi-Square Analysis: Training
Participants Versus Supervisors .................................................................... 65
26 Project Management and Measurement Chi-Square Analysis: Training
Participants Versus Direct Reports................................................................ 66
27 Performance Management Behavioral Survey Response Frequency, All
Respondents................................................................................................... 66
28 Performance Management and Measurement Chi-Square Analysis:
Direct Reports Versus Supervisors................................................................ 67
29 Performance Management Chi-Square Analysis: Training Participants
Versus Supervisors ........................................................................................ 67
30 Performance Management Chi-Square Analysis: Training Participants
Versus Direct Reports.................................................................................... 68
31 Managing Conflict Behavioral Survey Response Frequency, All
Respondents................................................................................................... 68
32 Managing Conflict and Change Chi-Square Analysis: Direct Reports
Versus Supervisors ........................................................................................ 69
33 Managing Conflict and Change Chi-Square Analysis: Training
Participants Versus Supervisors .................................................................... 69
34 Managing Conflict and Change Chi-Square Analysis: Training
Participants Versus Direct Reports................................................................ 70
35 Communication in the Workplace Behavioral Survey Response
Frequency, All Respondents.......................................................................... 70
36 Communication in the Workplace Chi-Square Analysis: Direct Reports
Versus Supervisors ........................................................................................ 71
37 Communication in the Workplace Chi-Square Analysis: Training
Participants Versus Supervisors .................................................................... 71
38 Communication in the Workplace Chi-Square Analysis: Training
Participants Versus Direct Reports................................................................ 72
39 Visioning and Planning Behavioral Survey Response Frequency, All
Respondents................................................................................................... 72
40 Visioning and Planning Chi-Square Analysis: Direct Reports Versus
Supervisors .................................................................................................... 73
41 Visioning and Planning Chi-Square Analysis: Training Participants
Versus Supervisors ........................................................................................ 73
42 Visioning and Planning Chi-Square Analysis: Training Participants
Versus Direct Reports.................................................................................... 74
43 Percentages of Observer Groups Reporting Positive Behavioral
Observations of Training Participants Compared to Training
Participants’ Positive Reactions .................................................................... 74
44 Level 3 Evaluation: Self-Perception Responses to Specific Executive
Expectations................................................................................................... 75
Figures
1 Kirkpatrick’s Four Levels of Evaluation Grouped by Individual or
Organizational Influence ............................................................................... 38
2 Distribution of Responses to Item 2 of the Project Management and
Measurement Section of the Reaction Survey: “How Satisfied Were You
With the Content of the Project Management and Measurement
Module?” ....................................................................................................... 48
3 Distribution of Responses to Item 7 of the Project Management and
Measurement Section of the Reaction Survey: “How Satisfied Were You
That the Information Provided Is Important for Someone in a
Supervisory Role at the University?” ............................................................ 49
4 Distribution of Responses to Item 2 of the Performance Management
Section of the Reaction Survey: “How Satisfied Were You With the
Content of the Performance Management Module?” .................................... 50
5 Distribution of Responses to Item 7 of the Performance Management
Section of the Reaction Survey: “How Satisfied Were You That the
Information Provided Is Important for Someone in a Supervisory Role
at the University?”......................................................................................... 50
6 Distribution of Responses to Item 1 of the Managing Conflict and
Change Section of the Reaction Survey: “How Satisfied Were You With
the Content of the Managing Conflict and Change Module?” ...................... 51
7 Distribution of Responses to Item 7 of the Managing Conflict and
Change Section of the Reaction Survey: “How Satisfied Were You That
the Information Provided Is Important for Someone in a Supervisory
Role at the University?”................................................................................. 52
8 Distribution of Responses to Item 1 of the Communication in the
Workplace Section of the Reaction Survey: “How Satisfied Were You
With the Content of the Communication in the Workplace Module?” ......... 52
9 Distribution of Responses to Item 7 of the Communication in the
Workplace Section of the Reaction Survey: “How Satisfied Were You
That the Information Provided Is Important for Someone in a
Supervisory Role at the University?” ............................................................ 53
10 Distribution of Responses to Item 2 of the Visioning and Planning
Section of the Reaction Survey: “How Satisfied Were You With the
Content of the Visioning and Planning Module?”......................................... 54
11 Distribution of Responses to Item 7 of the Visioning and Planning
Section of the Reaction Survey: “How Satisfied Were You That the
Information Provided Is Important for Someone in a Supervisory Role
at the University?”......................................................................................... 55
Chapter 1: Introduction
Statement of the Problem
The topic. The study university is a private, not-for-profit institution of higher
education that launched a mandatory leadership-development training program for all
supervisors who have three or more direct reports. At the time the study commenced, the
university was in its 48th year and under new leadership, with a new vision and a 10-year
business plan (beginning in 2010) to achieve the goal of becoming recognized as a
premier institution by 2020. Essential to this effort was a presidential initiative to develop
talent from within the organization through training. The expectation of this training is to
develop future organizational leaders as well as mobilize employees in support of the
organizational vision (see Appendix A).
The research problem. As the program was new for the university,
administrators wanted to know whether the training was effective. During the pilot phase
of the program, the evaluation process assessed participant reaction and self-reporting as
to whether there was an increase in learning. To determine overall program effectiveness,
administrators required a more extensive assessment, including observations of training
participants to determine whether they were actually applying the new knowledge on the
job. In addition, administrators wanted to know whether training leadership personnel
resulted in measurable impact in support of the university’s vision, mission, values, and
goals. According to Chong (2005), “Decisions should be made based on careful
observations and a clear understanding of the relevant factors involved, as well as the
goals of the learning. As is the case with formal training programs, results should be
carefully evaluated” (p. 18). To make educated decisions about modifications or even
continuation of the leadership training program, administrators would require sufficient
data and feedback, which was the basis for the study.
Background and justification. Because of the current economic climate, as well
as predictions of a qualified workforce shortage in the future, organizations are investing
more in employee training (Ashford & DeRue, 2010). Establishing effective corporate
learning programs is essential to maximizing training investment (Stevens & Frazer,
2005). To determine effectiveness of the training, organizations must measure return on
investment (ROI) through evaluation and assessment (Cohen, 2005). “In today’s tough
business climate, it is imperative that learning professionals link learning initiatives to
business goals and prove their value in this new workplace” (D. Kirkpatrick, 2010, p. 16).
Deficiencies in the evidence. The existing body of literature supporting the
importance of evaluating training effectiveness is extensive, particularly using D.
Kirkpatrick’s (2006) four-level model. However, there is an absence of documented
research efforts demonstrating the application of the third and fourth levels of the model,
specifically including the use of control groups. The third and fourth levels, behavior and
results, have garnered less than 15% utilization by organizations, mostly because of the
complexity involved in using control groups (Cohen, 2005).
In a 2011 study by the American Society for Training and Development, 91% of
organizations surveyed used the Kirkpatrick four-level model, although only 35%
engaged in the fourth-level evaluation of results (Galagan, 2011). In the 1980s, a fifth
level was developed, ROI calculation, but Galagan (2011) questioned executives about
the value of such calculation. Most chief executive officers agreed that change in
behavior and improvement in performance are the leading indicators that reflect effective
training (Galagan, 2011). J. Kirkpatrick and Kirkpatrick (2010) stated understanding
desired business outcomes allows training to be measured against return on expectations
(ROE). ROE differs from ROI in that ROI calculations focus on the measured outcomes
of the training itself, whereas ROE gauges results against the outcomes the business
expected (J. Kirkpatrick & Kirkpatrick,
2010).
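The fifth-level ROI calculation mentioned above is conventionally computed as net program benefits divided by program costs. The sketch below illustrates that standard formula with invented dollar figures; the amounts are hypothetical and not drawn from the study.

```python
def training_roi(program_benefits, program_costs):
    """Conventional training ROI: net program benefits as a percentage of costs."""
    return (program_benefits - program_costs) / program_costs * 100

# Hypothetical figures: $150,000 in monetized benefits against $100,000 in costs.
print(f"ROI = {training_roi(150_000, 100_000):.0f}%")  # prints "ROI = 50%"
```

The difficulty noted by Galagan (2011) is not the arithmetic but the inputs: monetizing `program_benefits` requires isolating the training's contribution from every other organizational influence.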
Audience and stakeholders. At the time the study was conducted in 2012, the
university employed over 4,000 faculty, staff, and administrators. Of this population,
approximately 650 supervised one or more employees, and 407 supervised three or more
employees. This latter population of supervisors was mandated to participate in training.
Future participants would include all supervisors (newly hired and promoted), and
participation would be mandated.
Those benefiting from this study include future training participants, as the
training may improve as each evaluation cycle reports strengths, areas of weakness, and
suggestions for modification and enhancement. Participants’ direct supervisors, those
who report to the training participants, as well as the organization as a whole would
benefit, as improved training should yield greater effectiveness in the form of increased
employee knowledge and application of knowledge to the work environment. Overall, the
university should see a benefit in terms of positive impact to the university’s vision and
goals.
Stakeholders include training participants, as well as their supervisors and
subordinates who might be indirectly affected by the participants’ possible increased
knowledge. Other stakeholders include the Board of Trustees and university executive
leadership, who required an assessment of the instructional modules, facilitators, and
impact of training to determine whether it is having a positive impact on the organization
and whether it should be continued or altered.
Program
Executive expectations. To determine whether training meets executive
expectations, three executives of the university—the university president, the executive
vice president and chief operating officer, and the provost and executive vice president of
academic affairs—were interviewed individually to ascertain what outcomes were
expected as a result of the training intervention (see Appendix B). The interviews yielded
three major themes: individual development and performance improvement, increased
employee engagement, and organizational agility.
The executives interviewed expected that the leadership training would affect
individual development through an increase in professional development and
performance. This was expected to take the form of employees taking more
initiative, supervisors delegating more tasks, and increased networking within the various
levels of employment hierarchy. These were observable behaviors that could be identified
through surveys during the Kirkpatrick third level of evaluation, to be administered
posttraining. An improvement in performance was also expected to be observed and
reflected in formal annual performance reviews by supervisors.
The interviewed executives predicted this increase in individual development
would lead to increased employee engagement. The university has participated in
biannual employee-engagement surveys, and the executives wished to see an increase in
engagement scores among those who were engaged, along with a decrease in disengaged
employees. As a possible derivative of engagement, executives were hopeful of a
decrease in employee turnover, as well as fewer employee-relations issues and
grievances.
The final expectation of the interviewed executives, organizational agility, was
expressed in terms of timelines. The leadership training should correlate with faster
decision making, shorter time frames for implementations and projects, and accelerated
accomplishment of the university’s 10-year business plan (Appendix C). The business
plan includes six measurable strategic priorities for which the executives hoped to see
postintervention progress that correlated with the leadership training.
Professional evaluation standards. The researcher followed the standards set by
the Joint Committee on Standards for Educational Evaluation (2011). Utility standards
are designed to ensure the evaluation is useful to stakeholders. Feasibility standards
increase effectiveness and efficiency of the evaluation. Propriety relates to fairness and
legality. Accuracy standards support clear data collection and reporting. Evaluation
accountability standards relate to documentation of the evaluation process.
Purpose of the Evaluation
The purpose of the study was to evaluate the effectiveness of a new leadership
training program at a private, not-for-profit university. Effectiveness was measured by
the learning outcomes self-reported by the participants, as well as by observed participant
behavioral changes on the job as reported by the participants’ managers, supervisors, and
subordinates (here termed direct reports). Other measurable impacts were to include
reduced employee turnover, increased engagement, and fewer employee-relations issues.
Organizational impacts were to be measured in terms of shorter timelines for decisions
and projects as well as faster progression toward the university’s priorities and overall
vision.
Kirkpatrick’s four levels of evaluation provided the framework of data collection.
The four levels are reaction, Level 1; learning, Level 2; behavior, Level 3; and results,
Level 4 (D. Kirkpatrick & Kirkpatrick, 2006). To evaluate reaction, participants were
asked to complete a postsession survey, quantifiable by degree of satisfaction. Level 2,
learning, was measured by pre- and postsession knowledge testing. Level 3, behavior,
included interviewing the participants and their managers to determine if learning was
indeed applied in the workplace. Results, Level 4, was to include an assessment of
workplace metrics such as engagement, turnover, and other such measurements compared
against a baseline assessment to determine whether training could be correlated to
improvement in individual and organizational performance.
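To make the Level 2 comparison concrete: the study's result tables report chi-square analyses of pre- and posttest responses, and the sketch below runs a Pearson chi-square test of that shape on invented counts. The figures are hypothetical, for illustration only; they are not the study's data.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts (invented for illustration): correct vs. incorrect
# answers on a module knowledge test, before and after training.
pretest = (45, 55)   # 45 of 100 correct before training
posttest = (78, 22)  # 78 of 100 correct after training

chi2 = chi_square_2x2(*pretest, *posttest)
print(f"chi-square = {chi2:.2f}")
# With df = 1, values above the 3.84 critical value are significant at p = .05.
```

A statistic this far above the critical value would indicate that the shift in correct answers from pretest to posttest is unlikely to be due to chance, which is the kind of evidence of learning the Level 2 evaluation seeks.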
Definition of Terms
Kirkpatrick’s four levels of evaluation. This program-evaluation framework is
intended to evaluate reaction, learning, behavior, and results of training and development
programs (D. Kirkpatrick & Kirkpatrick, 2007). Level 1 refers to a postintervention
reaction survey intended to measure participants’ satisfaction with the intervention. Level
2 refers to the learning or knowledge component of the training or intervention. Level 3
refers to the evaluation of behaviors affected by the intervention. Level 4 refers to the
overall results of the first three levels (D. Kirkpatrick & Kirkpatrick, 2007).
Intervention. This term refers to the training sessions and materials presented to
the participants; postintervention is the time period following the training.
Employee turnover. This term refers to the number of employees who leave an
organization (either voluntarily or involuntarily) and must be replaced, relative to the
number of active positions.
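The turnover definition above reduces to a simple ratio of separations to active positions. This minimal sketch uses invented figures, not data from the study, purely to make the calculation concrete.

```python
def turnover_rate(separations, active_positions):
    """Employees who left (voluntarily or involuntarily) and must be replaced,
    relative to the number of active positions, expressed as a fraction."""
    return separations / active_positions

# Hypothetical year: 40 separations against 400 active positions.
rate = turnover_rate(40, 400)
print(f"annual turnover = {rate:.0%}")  # prints "annual turnover = 10%"
```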
Employee engagement. According to Gallup (2009), “Engagement relates to the
levels of emotional attachment and productivity within each constituency and each
individual constituent” (p. 3). Employees who are engaged tend to be more productive; as
a result, often the organization for which they work is more productive (Gallup, 2009).
Chapter 2: Literature Review
A private, not-for-profit university headquartered in South Florida launched a new
leadership training program for executives, managers, and supervisors. At the time of this
study, the university was in its 48th year and under new presidential leadership and
vision. Training was identified as one of many methods for mobilizing the organization to
pursue the new vision.
The university documented values and goals in support of the university’s vision
and mission statements (see Appendix A). Many of the values reflect transformational
leadership theory, endeavoring to foster professional development of staff with the
express interest in growing leadership from within. According to Nikolic and Robinson
(2012), “Transformational theory focuses on inspiration and empowerment of followers”
(p. 101). Nikolic and Robinson further asserted that leadership must influence followers
if the organization is to meet its goals. Since the early 1990s, research has supported a
connection between transformational leadership behaviors and employee commitment
(Dunn, Dastoor, & Sims, 2012). Questions remain, however, whether transformational
leadership can be developed at all levels of an organization, not just at the senior level
(Northouse, 2004).
Organizations today are changing at a rapid pace and coping with reduced
resources. There is an urgent need for organizations to attract and develop
transformational leadership (Warrick, 2011). Abrell, Rowold, Weibler, and
Moenninghoff (2011) claimed that transformational leadership is considered beneficial to
organizations economically, hence the increasing interest in training and development of
such leadership. The American Society for Training and Development (2009) also
asserted that executive development is crucial to the success of organizations. Kouzes
and Posner (2007) stated it is possible to train and develop anyone for leadership roles,
not just those individuals with a natural disposition to leadership.
Fitzpatrick, Sanders, and Worthen (2004) observed, “Organizations are more
concerned with performance and the impact of training on the organization” (p. 491) and
as a result are focusing training on organizational performance indicators. This highlights
the need for training and development programs, and the ability to evaluate these
programs is fundamental to organizational success (Warrick, 2011).
To evaluate training through observation of participants, growth or changes in
behavior must be exhibited and recorded. Phillips (2007) asserted, “To connect to the
business impact, the behavior change must link to a clear business consequence” (p. 11).
J. Kirkpatrick and Kirkpatrick (2011) pointed out, however, that training initiatives tied
to the mission of the organization are only one of many organizational activities that
could affect results. “This makes it costly and mathematically impossible to isolate the
impact of the training program alone on the organizational results attained” (J.
Kirkpatrick & Kirkpatrick, 2011, p. 61). Easier to measure would be a defined set of
learning objectives that have been identified by content experts as part of the training
curriculum. J. Kirkpatrick and Kirkpatrick (2011) also suggested that stakeholders
identify expected outcomes of the intervention and, through the evaluation process,
uncover the degree to which expectations have been met.
This literature review was designed to inform and direct further development of
the study. Relevant areas for exploration in the literature were identified and include (a)
discussion of evaluation models, (b) criticisms of Kirkpatrick’s four-level evaluation
model, (c) support for the Kirkpatrick model, (d) synthesis of the findings, (e) need for
further research, and (f) leadership-development programs at benchmark universities.
Each area is explored in depth below. The literature review was instrumental in the
formulation of research questions established for the study, which are presented
following the summary of the literature review.
Conceptual Framework
Evaluation models. Several models are available to evaluate processes, but not
all adapt easily to evaluation of training. For example, Stufflebeam’s context, input,
process, and product model is considered useful because it helps the evaluator generate
interview or survey questions using a systems orientation (Haynes & Ghosh, 2008).
However, the context, input, process, and product model requires refinement to each
specific training program to respond to trends, philosophies, and perspectives (Horng,
Teng, & Baum, 2009). A similar model, the context, inputs, reactions, and outcomes
model, partially addresses a weakness of the Kirkpatrick model, the lack of a
preliminary assessment of business requirements, but does not assess business impact
(Elliott, Dawson, & Edwards, 2009).
Another example of a process-evaluation model is Brinkerhoff’s success case
model (as cited in Bersin, 2008). Bersin (2008) pointed out that the success case model
can be used to identify successful trainees but “does not attempt to build a complete end-
to-end measurement model” (p. 70). Fitzpatrick et al. (2004) suggested using
Kirkpatrick’s model for results-based evaluation and supplementing with Brinkerhoff’s
success case model approach to identify other issues.
Bersin (2008) pointed out the two most popular models to date were Kirkpatrick’s
four levels of evaluation and Phillips’s fifth level of ROI, which expands on Kirkpatrick’s
model. Phillips (as cited in Bersin, 2008) attempted to add on where he believed
Kirkpatrick left off. However, Bersin stated both models “limit an organization’s thinking
and make the measurement process difficult to implement” (p. 4). Elliott et al. (2009)
also contended that the surveys Phillips used to determine ROI were subject to personal
bias. Bersin suggested that ROI results are at best difficult, if not impossible, to correlate
or attribute to training. Bersin found ROI limiting because “it requires measurement of
business impact over time and computation of costs over time, both very difficult to
measure reliably and even harder to measure consistently across every training program”
(p. 61).
Bersin’s (2008) own model, the impact-measurement framework, extends the
Kirkpatrick model by building on the principles of Kirkpatrick. The impact-measurement
framework expands into nine measurement areas, from which the evaluator can choose
those appropriate to the specific program being evaluated. At the time of this study, no
peer-reviewed literature was found that involved the use of the impact-measurement
framework as an evaluation tool.
Criticism of Kirkpatrick’s four levels. For many years, Kirkpatrick’s four levels
of evaluation was the only formal model available to evaluate employee training. It has
been criticized as being oversimplified (Bates, 2004; Galloway, 2005). Galloway (2005)
contended that Kirkpatrick assumed generalization to larger populations, as well as
narrowly attributing outcomes to training effectiveness, without regard for other
variables. Bates (2004) noted the four-level model does not consider the influence of the
individual. Bates listed other limitations:
There are at least three limitations of Kirkpatrick’s model that have implications
for the ability of training evaluators to deliver benefits and further the interests of
organizational clients. These include the incompleteness of the model, the
assumption of causality, and the assumption of increasing importance of
information as the levels of outcomes are ascended. (p. 342)
Bersin (2008) also contended that Kirkpatrick’s model is incomplete and that the
four levels are not causally related. For example, satisfaction or reaction does not
necessarily correlate or lead to learning. In addition, learning does not necessarily lead to
behavioral changes. Galloway (2005) criticized the second level of evaluation, learning,
in that it does not recognize “barriers to transfer” (p. 23); further, self-assessments are
subject to bias, thus calling into question the reliability of such an assessment.
Bates (2004) asserted that the Kirkpatrick model neglects perceptions and
expectations of training by stakeholders. Elliott et al. (2009) also claimed that, with
Kirkpatrick, training is not evaluated in the context of organizational or business needs.
Bersin (2008) similarly alleged that the model does not justify how training impacts the
organization.
Because of the shortcomings of the model, there are serious concerns about the
ability of an organization to fully utilize the model as intended. Cohen’s (2005) research
revealed that fewer than 15% of organizations that engaged in Kirkpatrick’s four levels of
evaluation actually followed through with Levels 3 and 4. Cohen also maintained that the
Kirkpatrick model does not require a crucial component of third-level evaluation: the use
of control groups. In smaller organizations, once the target population has moved through
the training, control groups may not be possible. Cohen further claimed that a component
missing from Kirkpatrick’s fourth-level evaluation is the use of longitudinal surveys.
Determining long-term effects, such as those assessed by longitudinal evaluation, is
the most problematic aspect of evaluation (Ding, 2009).
Support for Kirkpatrick’s four levels. Criticisms notwithstanding, Lin, Chen,
and Chuang (2011) referred to Kirkpatrick’s four-level model as the most well-known
and universally used in performance evaluation. Even though Bates (2004) was critical of
the model overall, he recognized that its popularity is due to the simplicity it offers.
Elliott et al. (2009) called the model a “simple, yet effective evaluation system” (p. 658).
Bersin (2008) also acknowledged Kirkpatrick’s model as “widely understood” (p. 57) and
valuable as a “thinking tool” (p. 58); Bersin built his impact-measurement framework
upon the principles of the Kirkpatrick model. Rouse (2011) stated, “Kirkpatrick's
evaluation framework provides an excellent framework to determine strengths and
weaknesses of . . . instruction” (p. 3). Galloway (2005) attributed the present and
enduring popularity of the four levels to “the ease by which strategic alignment can be
measured” (p. 22). By and large, the consensus is that the Kirkpatrick model is
straightforward and popular for that reason.
The intention for the evaluation process is to measure and evaluate the
effectiveness of training, and because the training is expected to be repeated, the
evaluation process needs to be repeatable as well (Bersin, 2008). After the completion of
this initial study, the Office of Human Resources at the study university intended to
continue evaluation. As the university has not previously formally evaluated training, a
simplified, repeatable process like the Kirkpatrick model would be a solid framework
from which to start.
Synthesis of Findings
Due to its simplicity and ability to be repeated throughout the lifecycle of the
training program, Kirkpatrick’s four levels of evaluation (reaction, learning, behavior,
results) seemed to be the best framework by which to evaluate the leadership training
program at the study university. Some of the earlier criticism of the framework, such as
not assessing business impact, has been addressed by Kirkpatrick in recent years. The
concept of ROE (return on expectations) was introduced, requiring the evaluator to
understand stakeholders’ expectations of the training program; ROE has been added as a
precursor activity to Level 1 as well as a measurement and review activity during Level 4
(D. Kirkpatrick & Kirkpatrick, 2007; J. Kirkpatrick & Kirkpatrick, 2010).
Using the phrase, “start with the end in mind” (D. Kirkpatrick & Kirkpatrick,
2007, p. 108), evaluators are encouraged to ask stakeholders to define what would
constitute successful outcomes. The evaluator then can build metrics around those
outcomes to determine effectiveness of training, according to stakeholder definitions.
Reporting back to the stakeholders with the appropriate metrics allows them to make the
final determination as to whether training could be considered effective.
Donald Kirkpatrick originally developed the four levels specifically to evaluate
supervisory training, although the model is deemed effective with many well-defined
strategic training needs (J. Kirkpatrick, 2007). The leadership training program at the
study university includes supervisor, manager, and leadership concepts, with the intent of
advancing organizational goals. The Kirkpatrick model therefore provided an appropriate
evaluation framework.
Need for Further Research
A growing trend among organizations is for trainers to demonstrate alignment of
training, training outcomes, and organizational goals (Abrell et al., 2011; Holzer, 2012;
Phillips, Brantley, & Phillips, 2011; Warrick, 2011). Prior to the current economic
downturn, trainers were asked to provide specific training but relied only on smile or
reaction surveys and headcount as proof of effectiveness. Today, organizations want to
know not only the cost of training but also the returns gained (Warrick, 2011). Holzer
(2012) and other researchers have begun to highlight the importance of ensuring a link
between training programs and employer needs. Phillips et al. (2011) cited lack of
alignment with business goals from the outset as the top reason for training project failure.
According to Phillips et al.,
The end must be specified in terms of business needs and business measures so
that the outcome—the actual improvement in the measures—and the
corresponding ROI are clear. This establishes the expectations throughout the
analysis and project design, development, delivery, and implementation stages. (p.
53)
This is why Phillips et al., Bersin (2008), and others have attempted to expand upon the
current models for evaluating training, although there is still not much evidence in the
literature documenting successful evaluations against organizational goals.
Leadership Programs and Assessments at Other Universities
Leadership development historically has focused on four specific areas: (a)
decision making, (b) conceptual differences between leaders and managers, (c) team
building, and (d) feedback (Boaden, 2006). Outcomes of leadership-development training
are important but difficult to measure (Boaden, 2006). Organizations commit to
leadership training but often fall short in providing an environment in which the
developed leadership can succeed (Boaden, 2006). Boaden (2006) stated,
The lack of scholarly knowledge about leadership development is again cited
here, but it is also pointed out that most leadership models were developed for a
more stable and predictable environment than now, highlighting the need for
further understanding of the relationship between leadership and change. (p. 10)
Ruben (2005) observed that most academic and administrative units within
colleges and universities have many initiatives that they do not have the resources to
accomplish. It is therefore vital that university leaders be clear in prioritizing initiatives
and that first-time leaders be coached in the process.
The consulting firm Korn/Ferry International, a global provider of leadership-development
consulting, has worked with many organizations including colleges and universities in
creating or evolving leadership-development programs. Six universities were presented as
benchmark institutions by Korn/Ferry consultants Uher and Johnson (2011): (a) Emory
University, (b) Rice University, (c) Cornell University, (d) University of Virginia, (e)
Duke University, and (f) Vanderbilt University.
Emory University. Emory University, a private research university located in
Atlanta, Georgia, as of Fall 2012 had student enrollment of 14,236 and a faculty and staff
headcount of 13,012 (Emory University, 2012b). Research funding totaled $518.6 million
in Fiscal Year 2012. The university was ranked 20th among national universities in the
U.S. News & World Report (2012) ranking.
Emory provides multiple levels of professional development training,
differentiating among supervisory, managerial, and leadership development. Participation
in the Excellence Through Leadership program is by nomination and selection only;
candidates must also meet the following criteria: (a) current position is director (or
equivalent) or higher, (b) completed a minimum of 1 year of full-time service at Emory,
(c) considered a high
performer, and (d) regarded as a high-potential candidate for future advancement at the
university (Emory University, 2012a). Participants must commit to attend all sessions,
and sponsors must attend the orientation session with the participant. The program is 1
year, and participants are limited to 15 to 25 per cohort year. Smaller groups allow time
for individual assessment and smaller workgroups for team projects. Facilitation is
provided by university faculty considered subject-matter experts in the various topics.
The program has four main areas of focus: (a) people focus, (b) personal focus,
(c) business focus, and (d) results focus (Emory University, 2012a). People focus centers
on interpersonal skills and people management. Personal focus concerns personal
effectiveness and integrity. Business focus highlights strategic thinking and
organizational excellence. Results focus centers on results in terms of organizational
goals and initiatives. Actual university issues are assigned to teams as projects, with
deliverables presented to university senior leaders for approval. If approved, solutions are
implemented.
Success of the program is measured by the retention rate of participants, as well
as documented career progress, both internally and externally, where feasible to ascertain.
Feedback from program participants indicated that the team projects were too time
consuming on top of participants’ normal workload, and it was later learned that some
potential participants were opting out of the program as a result. Other feedback praised the
team projects because they were real-world challenges to solve and offered participants
the opportunity to immediately apply learning (Uher & Johnson, 2011).
Rice University. Rice University is a private research university located in
Houston, Texas. As of Fall 2011, Rice student enrollment was 6,082, with support faculty
and staff of 2,850 (Rice University, 2011). Rice University’s vision, Vision for the
Second Century, includes a 10-point plan adopted by the Rice University Board of
Trustees in 2005. In response to the call to action, the RiceLeaders program was
codeveloped the following year by the university vice president for administration, the
associate vice president for human resources, and a faculty subject-matter expert from the
university’s Jones Graduate School of Business. RiceLeaders was launched in 2007
(Kirby, 2011).
Key topic areas in the program include (a) self-awareness-based leadership and
assessments, (b) creativity, (c) communication, (d) strategy, and (e) teams. In addition,
the program includes coaching and action-learning projects to apply knowledge and skills
learned in training on the job. The first goal of the program is to improve collective
leadership competence at the university. The second goal is to deeply connect
participants with the university vision and strategy. The third goal is to garner support for
change throughout the university, starting with participants’ sphere of influence. The
fourth and final goal is to “knit the university together” (Kirby, 2011, p. 5).
Participants in RiceLeaders are identified by their degree of influence rather than
the actual position held (Kirby, 2011). Cohorts of 25 to 30 participants are chosen from a
broad spectrum of departments and asked to consider carefully the commitment prior to
accepting a seat. The program entails 12 days of participation over a 12-month period.
Four modules 2 to 2.5 days long are spaced out every 3 to 4 months. The modules are (a)
Fundamentals of Leadership, (b) Teams, (c) Creativity, and (d) Strategy Implementation
(Kirby, 2011).
Fundamentals of Leadership focuses on core concepts of leadership and self-
awareness (Kirby, 2011). Self-assessment instruments are used and scored for
participants to understand their own style and strengths. The session is held off campus
and overnight to foster team building among the cohort participants. The Teams module
covers the dynamics of work teams. The focus is on what makes teams effective and how
leadership can influence teams to maximize effectiveness. In the Creativity session,
participants are guided through interpretation and change management in support of the
Vision for the Second Century. Strategy Implementation shares ways in which
participants can navigate through higher education’s propensity for consensus and
consultation and “get stuff done” (Kirby, 2011, p. 6).
In addition to the modules, participants are coached in individual sessions by an
organizational psychologist, assigned to action-learning project teams of five participants,
and participate in three to five community-building events over the course of the
program. These additional activities are intended to encourage networking, apply
knowledge on the job, and foster relationships through a sense of community. The action-
learning projects are drawn from a list of university issues, such as helping to define the
Vision for the Second Century criteria for success.
Evaluation of the program began 3 years after the first cohort completed the
program (Uher & Johnson, 2011). The first focus group was conducted with a third of the
original 100 participants. The results yielded four main themes: (a) New skills and
behaviors were learned, (b) business strategy was better understood, (c) participants did
not substantially change how they contributed to the university but overall became more
participative, and (d) new work relationships were formed. In addition to focus groups,
Rice University staff began reaction surveys after each cohort. They also began tracking
promotions of participants as an indicator of training effectiveness (Uher & Johnson,
2011).
Cornell University. Cornell University is a private, Ivy League research
university. Cornell has a main campus located in Ithaca, New York, with two medical
campuses located in New York City and abroad in Qatar. As of 2011, the student
enrollment was 22,254 across all three campuses (Cornell University, 2011). The
headcount of faculty and staff was 9,645, also across all three campuses. Ranked 15th
among national universities by the U.S. News & World Report in 2012, Cornell is known
for its research and for the high number of patents it files and receives.
In response to the worsening economy, in 2010 Cornell launched a new 5-year
business plan to carry the university to its sesquicentennial. The foundational
concept of the plan is “One Cornell” (Cornell University, 2010, p. 2), intended to focus
resources to position the university for excellence in priority areas and ensure the
financial health of the university. Through five overarching goals, five objectives, and
seven strategic initiatives, Cornell University (2010) proposed to attract the best and most
diverse students, faculty, and staff and to offer excellence in teaching and research. The
plan delineates more than a dozen core metrics to be tracked over the life of the 5-year
plan, including detailed qualitative and quantitative indicators for each metric.
Once the framework had been laid, the university needed to engage the faculty
and staff in support of the plan. Translating and implementing the plan operationally
would require strong leadership. “Having strong leaders at Cornell is
essential if the university is to embrace the theme of the university's new strategic plan”
(Doolittle, 2008, para. 1). Leadership development was identified as a key component in
successfully carrying out the business plan.
Cornell’s Office of Organizational Development Services created and
implemented leadership-development opportunities through programs including the
Supervisory Development Certificate and the Management Academy. These programs
are for staff at various levels of leadership and at various stages in their careers, including
frontline supervisors and midlevel managers. The Harold D. Craft Leadership Program is
available by supervisor or unit manager nomination only, offering more advanced
leadership concepts to high-performing, high-potential individuals.
The capstone course, called Leading Cornell, is for high-performing individuals
in particular positions who have senior leadership potential and at least 1 year of service at
Cornell. Candidates are nominated by deans or vice presidents in partnership with human
resources, with consideration of qualifications including (a) senior leadership potential,
(b) significant accomplishments, and (c) projected future contributions as identified by
the nominator (Cornell University, 2012). The program is limited to 25 participants and
runs parallel to the academic year.
With a focus on application of learning to the workplace, the program description
offers, “Participants will work on projects that are important to the university while at the
same time developing practical leadership and management experiences, with the goal of
preparing participants to fill key positions as openings occur” (Cornell University, 2012,
para. 2). To illustrate, the 2010-2011 participants were asked to interview university
leaders, faculty, and staff; review the 5-year business plan; and make recommendations to
further the plan through presentations to the president and provost. Doolittle (2011)
quoted an assistant dean, who said, “The strategic plan now needs to be translated at the
unit level . . . so that people can see themselves in the plan” (para. 7).
Success of the program is determined by a combination of participant feedback and
project results. Participants offer feedback about the experience, including whether or not they
would recommend participation by others. Senior leaders evaluate the success of the
program by the team project deliverables, whether they hold merit and provide real
solutions to organizational issues (Uher & Johnson, 2011).
University of Virginia. Founded in 1819 by Thomas Jefferson, the University of
Virginia is a public institution located in Charlottesville, Virginia. Student enrollment as
of 2011 was 24,297, with a faculty and staff headcount of 7,979 (University of Virginia,
2012a). In the 2012 U.S. News & World Report list of best colleges, the University of
Virginia ranked second in the category of Top Public Schools and 25th out of 50 in the
category of Best National Universities.
In 2007, the university reconvened its Commission on the Future of the
University to consider strategies to combat the rise of global competition in higher
education and to discern ways to distinguish the University of Virginia among other
local, national, and international institutions. The Commission on the Future of the
University (2008) offered, “Our strategy is to strengthen our core resources while
strategically funding selected new efforts that will further distinguish the University” (p.
4). Strong and consistent leadership would be necessary to carry forth new initiatives,
often achieved by training and development efforts. Recognizing this, University of
Virginia President Casteen (2008) said, “This new generation of capable, visionary
leaders will guide the University into the next decade and beyond” (para. 7).
The Leadership Development Center is a division of the university Human
Resources unit and focuses on development opportunities, tools, and strategies that
support individual professional development. The Leadership Development Center
provides a variety of programs based on the various stages of employee development.
The programs include (a) Supervisory Essentials for newly appointed supervisors, (b)
Managing at the University of Virginia for all levels of supervisor positions, (c)
Managing the University of Virginia Way for supervisors and managers with a minimum
of 3 years of experience, (d) Leadership Strategies for more experienced managers, and
(e) the Executive Onboarding Program to acclimate executives new to the University of
Virginia.
Leadership Strategies is considered a managerial development program, offered
once per year in cohort format (University of Virginia, 2012b). Candidates must be
nominated by a manager and must meet specific criteria: (a) hold director level position
or above, (b) have 5 years or more of management experience, and (c) have multiple
supervisees as well as a broad span of influence.
The objectives of the program include individual development by use of
comprehensive, 360-degree feedback; addressing of leadership issues with resident
subject-matter experts; and completion of a project related to the university’s mission and
strategies. In addition, cohorts are given private, interactive audiences with senior
administrators from across campus to learn about issues specific to the administrators’
areas. For example, the 2012 cohort met with President Teresa Sullivan, who discussed
the vision for the University of Virginia. Other meeting topics included budget and
financial concerns, the student experience, the 2012 legislative session and its impact on the
University of Virginia, leadership transition and change management, as well as
university goals and organizational alignment (University of Virginia, 2012b).
Success of the program is measured by an increase in readiness and engagement
of participants (Uher & Johnson, 2011). In addition, the program is expected to provide
positive internal and external promotion. The key measurement of effectiveness is
resolution of institutional issues via action-learning projects.
Duke University. Located in Durham, North Carolina, Duke University
originated as Trinity College in 1838, later expanding in 1924 into Duke University,
named for its major benefactor, James B. Duke. Today the university encompasses 10 colleges
and schools and the Duke University Health System. Student enrollment in 2011 was
14,746 students, and the faculty and staff headcount (including the Health System) was
34,366 (Duke University, 2012).
In 2004, Duke University began an intensive planning period, culminating in a
new strategic plan adopted by the Board of Trustees in 2006. Called Making a Difference,
the Duke University (2006) plan aimed to enhance university academic excellence, while
continuing to capitalize on the strengths of the university’s reputation in collaboration
and connection through diversity (Burness, 2006). Six goals are intended to ensure the
strategic plan meets the vision.
1. The first goal is to “increase the capacity of our faculty to develop and
communicate disciplinary and interdisciplinary knowledge” (Duke University, 2006, p.
25). The overall strategic plan designates funds to support an increased effort at recruiting
and retaining superior faculty.
2. “Strengthen the engagement of the university in real world issues” (Duke
University, 2006, p. 33) is a recommitment to the university’s specific strengths and to
building upon them in service to society.
3. The third goal is to “attract the best graduate and professional students and
fully engage them in the creation and transmission of knowledge” (Duke University,
2006, p. 39). The university will focus more attention on inclusion and fostering a sense of
community among graduate students.
4. The fourth goal is to “foster in undergraduate students a passion for learning
and a commitment to making a difference in the world” (Duke University, 2006, p. 41).
Through innovative teaching methods, students will be empowered to participate more in
their own education and to connect with the community at large in the process.
5. “Transform the arts at Duke” (Duke University, 2006, p. 47) means enhancing
programs and opportunities across disciplines.
6. The final goal is to “lead and innovate in the creation, management, and
delivery of scholarly resources in support of teaching and research” (Duke University,
2006, p. 51). The university will provide enhanced support to the libraries and
information technology in order to augment academic endeavors (Duke University,
2006).
The strategic plan outlines clear definitions of success and assessment strategies.
For example, for the first goal to increase faculty quality, Duke will strive to ensure 75%
of faculty hires will be in fields of strategic importance. Along with this indicator, nine
other assessment strategies will be employed to gauge the success of this one goal, with
nine identified key expectations or outcomes.
The latest iteration of the university’s mission statement was adopted by the
Board of Trustees in 1994 and included the directive from founder James B. Duke that
states members of the university are “to provide real leadership in the educational world”
(Duke University Board of Trustees, 2001, para. 2) and “maintain a place of real
leadership in all that we do” (para. 4). The overarching theme at Duke University is to
demonstrate leadership into the next generation through the action items of the strategic
plan.
To support that effort, Duke University Human Resources offers an array of
training and professional development opportunities designed for different audiences.
The Professional Development Institute offers a First Time Supervisors program. It is a
highly selective program, limited to a maximum of 20 participants. Consideration for the
program is contingent on (a) manager nomination, (b) 3 years of full-time service at Duke
University, (c) 2 consecutive years of meeting or exceeding performance expectations,
and (d) acceptance of a retention agreement of 2 years following completion of the
program. In addition, the candidate’s manager must identify a mentor, another manager
or supervisor with at least 5 years of service at Duke University who is willing to meet
with mentees and attend sessions as required.
The next stage offering of leadership development is the Duke Leadership
Academy. Focused on leadership and behaviors to implement business strategies, this
highly selective program requires nomination from a vice president or dean at Duke.
Over 12 months, participants experience curriculum as a mix of classroom learning,
exposure to senior leaders at Duke, individual and 360-degree assessments, coaching, and
practical application of theory to current Duke University challenges. The program is
based on best practices from the curriculum codeveloped by Duke’s Fuqua School of
Business, Kenan Institute for Ethics, and University Athletics. This codeveloped program
is referred to as the Fuqua/Coach K Center on Leadership and Ethics (COLE). The COLE
curriculum is known for (a) developing leaders of consequence, (b) being a knowledge
source, (c) being a community builder, and (d) providing a developmental portal to
integrate students from knowledge to application (Fuqua/Coach K COLE, 2012).
Coach K, short for Coach Mike Krzyzewski, is recognized for his leadership and
his ability to coach leadership from others. Coach K has been a successful basketball
coach for over 30 years and has learned many lessons from interactions with so many
different players over the years. In an interview with Sitkin and Hackman (2011), Coach
K said that he never wants to force someone else to try to lead in his same style; he wants
to help others learn what their style is. Similarly, Duke President Richard Brodhead
wants to help emerging leaders at Duke find their own styles and discover how those
styles can ultimately help Duke move forward as an organization. At the conclusion of
the 3rd year of the Duke Leadership Academy, Green (2012) captured a quote from
President Brodhead to
participants:
In my job, leadership is about meeting everyone, listening to everyone, then
putting their ideas back out to them in terms of the mission of Duke. . . . People
want to hear you put their aspirations into words so they can act on them. (para. 3)
If servant leadership is defined as leadership that is attentive to the growth
and development of those with whom one works, then the style of current leadership at
Duke University appears to be servant leadership. Through the offering of the Duke
Leadership Academy, senior administrators are demonstrating the spirit to cultivate
servant leadership throughout the organization, and into the future, planning for the long-
term success of Duke University. According to Uher and Johnson (2011), success of the
Duke Leadership Academy is measured by increased engagement and commitment to
Duke. It is also measured by strengthened leadership capabilities and the personal
development plans created for each participant during the course of the program. The
keys to success are the use of best practices from the COLE program, access to university
and community leaders, and the opportunity to utilize new skills through action-learning
projects.
Vanderbilt University. Founded in 1873, Vanderbilt University is an
independent, private research university in Nashville, Tennessee. Known for research and
development, Vanderbilt University secured just over $0.5 billion in Fiscal Year 2011,
ranking 20th among U.S. colleges and universities in federal research funding and in the
top 10 in National Institutes of Health research (Zeppos, 2012). Student enrollment in
2011-2012 was 12,859, a little more than half of which was undergraduate (Vanderbilt
University, 2012b).
Vanderbilt was ranked 17th in the U.S. News & World Report (2012) Best Colleges
category and ranked in 10 other categories. The total number of employees in Fiscal Year
2011, including the medical center, was 23,834, making Vanderbilt University the second
largest employer in the state of Tennessee.
As of 2012, six major initiatives at Vanderbilt were intended to move the
university toward its vision: (a) enhanced financial aid, (b) College Halls at Vanderbilt,
(c) graduate education, (d) international education, (e) research, and (f) energy and
environment. The aim of enhanced financial aid was to remove financial barriers for
students and replace need-based student loans with grant and scholarship assistance. This
required Vanderbilt to reallocate funding where possible and attract new scholarship
endowment, in the quest to strengthen the policy of admitting students based on merit
rather than financial status (Vanderbilt University Office of the Chancellor, 2012). The
College Halls initiative was a transformation of residential life where select faculty would
live in apartments among the students to create a more bonded community and best
experience for 1st-year students, in particular. Graduate education is strongly connected
to research and is thus a natural progression; international education is interrelated.
Vanderbilt endeavors to increase not only recruiting of international students but also
research relationships, collaborations, and funding opportunities from international
sources. Even though Vanderbilt already has a strong reputation for research, there is still
room to grow and garner more research dollars. The final initiative, energy and
environment, involved university recycling and carpooling efforts, but more can be done.
The university has begun green building efforts and wants to improve education and
individual efforts at conservation.
Vanderbilt University’s statements of mission, goals, and values are focused and
succinct. Along with scholarly research, creative teaching, and service to society, the
quest for new knowledge will continue by virtue of the values of
“open inquiry, equality, compassion and excellence in all endeavors” (Vanderbilt
University, 2015, para. 2). In a state-of-the-university address, Chancellor Zeppos (2012)
commended the university community for exercising good fiscal judgment, which has
allowed Vanderbilt to continue to invest in its students and its own future. Although
many peer universities were dealing with budgetary cuts and forced reduction in
enrollment and staffing, Vanderbilt’s student enrollment increased, research funding
increased, and alumni and generous benefactors increased their financial support over
prior years. This abundance is allowing Vanderbilt to continue to reinvest in its own
infrastructure and in talent development.
The administration of Vanderbilt has been focusing on overarching themes as the
backdrop for advancing strategic priorities. The themes include One University,
collaboration, and leadership excellence. Most universities are splintered, with schools
and colleges within the university competing with each other for scarce resources.
Vanderbilt, through a sense of community, prefers instead to focus on leveraging
synergies between colleges to get the most out of resources. Bringing the One
University initiative to fruition will take collaboration and leadership excellence
(Patterson, 2009). At a Faculty Senate meeting in 2012, Chancellor Zeppos said that the
university would stand or fall based on the quality of its leaders (Vanderbilt University
Faculty Senate, 2012). What follows, then, is investment in university talent through
training and development.
To ensure consistency in support of the university mission and goals, the
Organizational Effectiveness unit of Human Resources provides programs and courses in
leadership development. At the entry level, the Human Resources Leadership Foundation
Series includes five modules intended to help participants develop supervisory skills: (a)
Attracting, Hiring, and Retaining New Staff; (b) Targeted Selection (for hiring managers
only); (c) Developing and Coaching Staff; (d) Managing Performance and Behavior; and
(e) Legal Issues (Vanderbilt University, 2012a). Modules can be taken in any order and
with no specified time limitation but rather at the convenience of the individual. Course
instructors are members of the Organizational Effectiveness team, Human Resources, and
General Counsel.
Other individual leadership or supervisory sessions are available, including topics
such as becoming a leader, performance conversations, change and transition, project
management, and conflict. Leadership training is also provided
under the heading of Health and Wellness for Faculty and Staff, including role change for
newly promoted leaders, team leadership, and dealing with conflict as a leader.
An offering at the highest level is the Leadership Academy. The Leadership
Academy threads the concepts of One University, leadership excellence, and
collaboration through eight sessions spanning 6 months. There is a kick-off meeting, a
final celebration meeting, and six sessions devoted to topics surrounding best practices in
leadership. The sessions are offered in sequence, building on concepts in progression
(Uher & Johnson, 2011).
The program kick-off is an orientation session, which includes an overview of the
program and the Vanderbilt Commitment. The Month 1 training session is a day-long
session entitled Leading Self. This session includes four topics of focus: (a) leadership
excellence at Vanderbilt, (b) the leadership journey, (c) growing self-awareness for
effective leadership, and (d) self-development and the art of learning agility. These four
topics are centered on understanding one’s self in terms of style and perceptions. The
Month 2 session is a day-long continuation session of Leading Self, including the topics
of (a) the power of influence and inspiration, (b) personal vitality, and (c) elements of
leader presence. Building on the previous session, Month 2 explores the translation of
personal leadership style to sphere of influence (Uher & Johnson, 2011).
Month 3 begins a segment called Leading Others. The session includes a dinner
event, and the topics are (a) orchestration, (b) bringing out the best in others, and (c)
teachable point of view. These topics cover the art of organizing and inspiring individual
contributions. Month 4 is a continuation of Leading Others, including the following
topics: (a) real conversations, (b) tackling conflict, (c) building effective teams, (d)
creating vital teams, and (e) individual contributor to a manager. This day-long session
helps leaders to create teams and address inevitable conflict (Uher & Johnson, 2011).
Month 5 begins the segment called Leading the Institution. The day-long session
covers topics of (a) institutional success, (b) leadership networking (over cocktails and
dinner), (c) the business of higher education, and (d) problem resolution. This session
introduces issues and concerns of importance to the organization and what to do about
them (Uher & Johnson, 2011).
The Month 6 session is 2 days and includes (a) the student or patient experience,
(b) the business of higher education, (c) high-level organizational structure, and (d) the
leadership challenge. This session brings the focus to the highest level of oversight,
understanding the business in general and the relationship to Vanderbilt University. The
final session is a graduation celebration, also covering the topic of community
connection. This session celebrates the successful completion of the program by
participants and introduces community involvement for consideration (Uher & Johnson,
2011).
Throughout the program, a formal accountability and assessment process
captures data to determine the effectiveness of the program. Participants are given
prework, assignments, and action-learning projects that are monitored for rates of
completion and degree of accuracy. After-learning technology, in the form of online
postsession testing, is also employed. In addition to data collection, group
debriefings are held. Coaching and mentoring are provided to help solidify information
and help participants to shape new skills. Finally, 360-degree assessments are
administered, including peers, coaches, and senior administrators, to determine whether
leaders have begun to demonstrate the level of leadership this program was designed to
instill (Uher & Johnson, 2011).
Summary of the Literature
Among the many models of program evaluation, consensus in the literature has
been that the Kirkpatrick model appears to be the most common option for evaluating
employee training, especially if flexibility and future replication of evaluation processes
are considerations. The Kirkpatrick model is commonly used because of the
straightforwardness of, and broad agreement on, its four levels. Training outcomes must include
learning and behavioral aspects. Regarding reactions, D. Kirkpatrick and Kirkpatrick
(2007) underscored the importance of measuring satisfaction of participants, because
future training depends on positive reactions not just of participants but of their managers
as well. Given that so many organizations do go through the effort of reaction surveys
(Cohen, 2005), this is a widely embraced concept.
From the review of benchmark universities in leadership development (Uher &
Johnson, 2011), three main components of successful programs were shared in common
among the identified organizations. The first is clearly defined expectations of the
outcomes of the program. One expectation is to identify those individuals who are truly
potential leaders, not just high performers, to help stock the pipeline of future leaders.
The second component is to utilize the program to have these high-potential high
performers resolve organizational issues through action-learning activities in the
program. The use of action-learning activities makes it much easier to identify the ROI of
the program. The third component of benchmark leadership-development programs is the
differentiation in the program between leadership and managing. All six universities
discussed have separate supervising, managing, emerging-leaders, and leadership
programs. For leadership programs, this separation helps keep the content focused on
high-level oversight unique to the executive level.
Research Questions
Within the framework of Kirkpatrick’s four levels of evaluation, the research
questions in support of this study reflect the four levels of reaction, learning, behavior,
and results. At Level 1, two questions were asked. Research Question 1 was the
following: What outcomes are the stakeholders anticipating as a result of supervisors
participating in the university’s leadership training program? These expectations were
likely to be reflective of the university’s vision and strategic priorities. Research Question
2 was the following: Did participants in the training program react favorably enough to
recommend training to others? The expectation was that content would be considered
useful by participants, so that they would recommend training to their subordinates.
The Level 2 question was Research Question 3: Did participants understand and
retain the desired learning outcomes, as specified by the training module content
measured by pre- and posttraining testing? The expectation was that the information
would be retained.
The Level 3 question was Research Question 4: Are participants observed by their
managers applying learning outcomes in the work environment?
The Level 4 question was Research Question 5: Is the training having a
measurable impact on the organization and meeting stakeholder expectations? If
stakeholder expectations are aligned with the organization’s strategic priorities, the
training can be molded according to goals as they adjust over time.
Chapter 3: Methodology
Program
Training was evaluated at a not-for-profit, private university headquartered in the
southeastern United States. The university was chartered in 1964, founded by educators
who had innovative ideas for providing higher education opportunities to students in
physical and social sciences via long distance, challenging the convention of brick-and-
mortar establishments long before personal computing took hold.
In the next 48 years, the university would grow to 18 colleges and schools,
offering 144 programs in undergraduate, graduate, and first-professional degrees. At the
time of this study, student enrollment was approximately 28,000, locally and at sites in 23
states, making the university the eighth largest not-for-profit, independent university in
the United States. The headcount of supporting faculty, staff, and administration was just
over 4,000 as of 2012. With a new president at the helm, the university has an aggressive
new organizational vision and 10-year business plan, which requires mobilization of the
university’s workforce. Bacharach (2007) stated, “Leadership is about coming up with a
viable agenda, getting people behind your initiative, and sustaining momentum so people
will stay on your side and bring ideas to fruition” (p. 14). Leadership plays a pivotal role
in marshalling an organization’s workforce; therefore, by extension leadership
development is a critical component as well.
The university leadership-development training consists of five competency-based
modules intended to provide participants with enough knowledge and overview of skills
to help them competently perform in their roles and encourage competency from the
individuals reporting directly to them (direct reports). The five modules are (a)
Communication in the Workplace, (b) Managing Conflict and Change, (c) Performance
Management, (d) Project Management and Measurement, and (e) Visioning and
Planning.
The Communication in the Workplace module encompasses facets of effective
interpersonal interactions to address clarity in verbal and written forms and skills in
listening and collaboration. Topics include active listening, verbal and written
communication, barriers to effective communication, and interpersonal skills to enhance
communication in the workplace.
The module entitled Managing Conflict and Change addresses desired and
undesired effects of change. Facilitators discuss the critical role of conflict in change
efforts, as well as components of an effective change plan, and offer role-playing
exercises to practice reframing techniques.
The Performance Management module centers on interviewing strategies,
employee motivation and recognition, delegation, coaching, and feedback. Participants
are provided with strategies to craft interview questions using behavior to predict future
performance. Facilitators also provide an overview of motivation strategies for enhanced
employee performance as well as strategies to recognize and reward employees.
In the Project Management and Measurement module, participants are provided
the steps in project planning and risk analysis. Facilitators present the steps in project
scheduling, including detailed benchmarks and time lines. Participants are expected to
identify methods to document activities related to projects and discuss a variety of
metrics for measuring change stemming from project implementation.
The Visioning and Planning module provides an overview of leadership theories
and processes. Facilitators outline the steps involved in strategic planning, best practices
in team building, and factors involved in effective decision making.
Participants
The target population for this study included all participants in the leadership-
development training program at the study university. This population included
supervisors, managers, directors, and every level of administrator who managed the
activities of three or more direct reports at the university. University supervisors
represented 18 colleges, schools, and centers as well as 14 nonacademic support centers.
Of the 650 supervisory positions at the university in 2012, when the leadership-
development training program was initiated, 407 (63%) were identified as having three or
more direct reports and thus were required to participate. Supervisors who had fewer than
three direct reports were required to attend after the first group completed
training.
Training participants were surveyed for knowledge pre- and postintervention.
They were also surveyed postintervention regarding their satisfaction with, and reaction
to, the training, as well as to assess whether executive expectations had been met.
Participants in the evaluation of the leadership-development training program
included training participants, supervisors of the training participants, and the direct
reports of the training participants. A 360-degree approach to the evaluation, involving
surveying managers and subordinates of training participants, attempted to overcome
perception distortion by allowing for a variety of sources (D. Kirkpatrick & Kirkpatrick,
2007).
Evaluation Model
The evaluation framework model used for this study was Kirkpatrick’s four levels
of evaluation (see Figure 1). The four levels are reaction, learning, behavior, and results.
Figure 1. Kirkpatrick’s four levels of evaluation grouped by individual or organizational
influence. Individual influence: Level 1, Reaction (assesses satisfaction with content,
facilitator, time, and relevancy) and Level 2, Learning (measures knowledge through
statistical comparison of pre- and postsurveys). Organizational influence: Level 3,
Behavior (360-degree surveys of supervisors and direct reports) and Level 4, Results
(statistical comparison between measurements and expected outcomes).
Level 1, reaction, refers to participants’ postintervention level of satisfaction with
the training, including satisfaction with the content and its relevance for supervisors,
perceived competency of the facilitator, length of time devoted to the training, and
helpfulness of materials provided. Reaction adds to the “chain of evidence” (D.
Kirkpatrick & Kirkpatrick, 2007, p. 123) that builds through the four levels and
ultimately provides data to decide on the effectiveness of the intervention. D. Kirkpatrick
and Kirkpatrick (2007) indicated that the degree to which participants find the
intervention relevant is critical to actual learning and later application of the learning on
the job.
Level 2, learning, measures whether participants acquired new (or increased)
knowledge. For this study, learning was determined by comparing pre- and
postintervention test results. Test content was developed from the learning objectives of
the leadership-development training modules. Statistical analysis of the net scores, the
difference between the pre- and posttests, tested for significance of change correlated to
the intervention.
According to D. Kirkpatrick and Kirkpatrick (2007), surveys and questionnaires
are two methods of evaluating application of new behaviors on the job to obtain Level 3
data posttraining. D. Kirkpatrick and Kirkpatrick (2007) recommended 360-degree
evaluations, soliciting feedback from superiors and subordinates of training participants
to identify whether new behaviors introduced during training are demonstrated on the
job. Feedback is provided in a confidential manner, to avoid conflicts and promote trust
in the process (Demirkaya, 2007). For this study, a 360-degree evaluation was employed,
through which supervisors and direct reports were surveyed using online questionnaires,
which allowed anonymous responses. The survey participants were asked if new or
increased behaviors for skills related to specific training modules were being observed.
Level 4, results, assesses workplace metrics such as engagement, turnover, and
other such measurements compared against a baseline assessment to determine whether
training can be correlated to improvement in individual and organizational performance.
For this study, results were to be evaluated through analysis of actual outcomes to the
expected outcomes articulated by university executives. Statistically significant changes
toward expected outcomes were to be the determining factor as to whether the training
intervention could be considered effective or not. Both summative and formative, the
outcomes would inform the training coordinator where and how the program might
require modification.
Instruments
Leadership training participants were surveyed at different times in the training
cycle in accordance with the first three of Kirkpatrick’s four levels of evaluation. Level 1,
reaction, was captured by survey of participants’ overall reaction to the training
immediately upon completion of the individual sessions (Appendix D). Level 2, learning,
was captured by pre- and postintervention knowledge testing at intervals of 1 week prior
to the training and 1 week after the training. Survey items were structured as multiple
choice or true–false, based on the content of each training module (Appendix E). Testing
before training could help facilitators understand the level of knowledge participants had
prior to entering training and therefore adjust the content to emphasize information that
would be new to participants. In addition, testing 1 week after the intervention could help
identify what new information was understood, as well as what information actually
stayed with participants into the near future.
Level 3, behavior, was measured through surveys administered to participants’
supervisors and subordinates several weeks after completion of the entire curriculum of
training (Appendix F). The survey included Likert-scale rated questions intended to
determine if participants were displaying new or increased behaviors on the job
postintervention. Level 4, overall results, was intended to compare organizational
baseline data obtained prior to the initiation of training, such as data from employee
engagement surveys and other employee metrics, to postintervention data.
Procedures
Design. Survey items for Levels 1 and 3 were based on 5-point Likert-scale
ratings intended to measure reaction to content, relevance to participants’ jobs, facilitator
effectiveness, and materials provided. Survey items associated with Level 2 included
multiple-choice and true–false questions. Responses were intended to be in the form of
quantifiable metrics for ease of comparison for statistical analyses.
Data collection procedures. The ObjectPlanet Opinio online survey software
application was used for collecting Levels 1–3 survey data from training and nontraining
participants (such as supervisors and direct reports of training participants). Level 4 data
were to be collected from institutional data provided from university records. Employee
engagement data would be provided from the Office of Institutional Effectiveness. Other
employee metrics, such as data regarding turnover rates and the number and nature of
employee relations issues, would be provided by the Office of Human Resources.
Data analysis. Nonparametric analysis was used for comparisons of Likert-scale
questions. Chi-square tests of independence were used for a majority of the statistical
comparisons. For each question on the pre- and posttraining learning surveys, the number
of correct and incorrect responses for all participants was recorded in a 2 x 2 table as
shown in Table 1.
Table 1
Model of Comparison Analysis for Level 2 Pre- and Posttraining Knowledge Testing
Question 1            Pretraining survey   Posttraining survey   Totals
Correct responses     A                    B                     A + B
Incorrect responses   C                    D                     C + D
Total                 A + C                B + D                 N
In Table 1, the number of correct responses from all participants in the pretraining
survey is given by A. A + C sums to the total number of participants for the pretraining
survey (assessing knowledge before the training).
Similarly, B + D represents the total number of participants for the posttraining survey
(assessing knowledge after the training). N is the total number of surveys administered.
For example, if 100 subjects participated in the pretraining survey, and the same 100
subjects participated in the posttraining survey, the total N would be 200, which would
represent the total number of surveys, not the total number of participants (still 100).
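This bookkeeping can be sketched in code. The following Python fragment (an illustration only; the response lists are invented data, not the study's actual results) tallies one survey item into the 2 x 2 layout of Table 1:

```python
# Sketch of tallying one survey item into the 2 x 2 layout of Table 1.
# The response lists below are invented for illustration only.

def tally_item(pre_correct, post_correct):
    """Return (A, B, C, D): correct/incorrect counts for the pre- and
    posttraining surveys, matching the cells of Table 1."""
    a = sum(pre_correct)          # A: correct, pretraining
    b = sum(post_correct)         # B: correct, posttraining
    c = len(pre_correct) - a      # C: incorrect, pretraining
    d = len(post_correct) - b     # D: incorrect, posttraining
    return a, b, c, d

# 100 participants take both surveys, so N counts 200 surveys, not 200 people.
pre = [True] * 30 + [False] * 70      # 30 correct before training
post = [True] * 45 + [False] * 55     # 45 correct after training
a, b, c, d = tally_item(pre, post)
print(a, b, c, d, a + b + c + d)      # 30 45 70 55 200
```

Note that N = 200 here is the number of surveys administered, not the number of participants, mirroring the distinction drawn above.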
The null hypothesis established for the study was as follows: There is not a
statistically significant difference at the .05 level in the pretraining and posttraining
knowledge of participants (i.e., learning is not correlated with training).
Examples of response distributions. Table 2 represents an example distribution
of responses to Item 1 of the learning surveys (pre- or posttraining, as the questions are
identical). In this example, the chi-square statistic (χ² = 0.57, with 1 degree of freedom)
would not be significant at the .05 level (p = .4503), which indicates that the null
hypothesis could not be rejected. There would be no evidence of a statistically significant
correlation between learning and training.
Table 2
Example of Level 2 Learning Survey Responses Indicating No Evidence of Learning
Item 1                Pretraining survey   Posttraining survey   Totals
Correct responses     30                   35                    65
Incorrect responses   70                   65                    135
Total                 100                  100                   200
Table 3 represents a similar example distribution of responses to Item 1 of the
learning surveys (pre- or posttraining, as the questions are identical). However, in this
example, the chi-square statistic (χ² = 4.8, with 1 degree of freedom) would be significant
at the .05 level (p = .0285), indicating the null hypothesis could be rejected. This would
be evidence of a statistically significant correlation between learning and training for
Item 1.
Table 3
Example of Level 2 Learning Survey Responses Indicating Evidence of Learning
Item 1                Pretraining survey   Posttraining survey   Totals
Correct responses     30                   45                    75
Incorrect responses   70                   55                    125
Total                 100                  100                   200
Logistic regression performed on the example in Table 3 yielded the same chi-
square statistic of 4.8 with 1 degree of freedom and p = .0285, indicating the model is
significant at the .05 level. The null hypothesis would be rejected, and evidence would
suggest a correlation between participation in training and receiving a correct response
for Item 1. In addition, logistic regression would produce an odds ratio equal to 1.909,
indicating the odds of receiving a correct response for Item 1 increase by 91% when
taking the training.
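The statistics in these worked examples can be reproduced directly from the cell counts. The sketch below uses the standard formulas for a 2 x 2 chi-square test of independence (1 degree of freedom, no continuity correction) and for an odds ratio; it is offered only as a check on the arithmetic, not as the study's analysis software:

```python
# Recomputing the worked examples of Tables 2 and 3: 2 x 2 chi-square
# test of independence (1 df, no continuity correction) and odds ratio.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]], where columns
    are the pre/post surveys and rows are correct/incorrect responses."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio(a, b, c, d):
    """Odds of a correct response posttraining relative to pretraining."""
    return (b / d) / (a / c)

# Table 2: 30/70 correct/incorrect pre, 35/65 post (no evidence of learning)
print(round(chi_square_2x2(30, 35, 70, 65), 2))   # 0.57

# Table 3: 30/70 pre, 45/55 post (evidence of learning)
print(round(chi_square_2x2(30, 45, 70, 55), 2))   # 4.8
print(round(odds_ratio(30, 45, 70, 55), 3))       # 1.909
```

With 1 degree of freedom, χ² = 0.57 corresponds to p ≈ .45 and χ² = 4.8 to p ≈ .028, matching the reported p values, and the odds ratio of 1.909 is the roughly 91% increase in the odds of a correct response noted above.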
Chapter 4: Results
Boaden (2006) contended that evaluation processes are considered an
instrumental part of any development program. After 2 years and 33 training sessions
held as part of the leadership-development program at the study university, considerable
data have been collected using the framework of Kirkpatrick’s four levels of evaluation.
Each level of evaluation corresponds to specific research questions and surveys outlined
in this study.
Population and Data Collection
The participants of the leadership-development training were employees of the
university who supervised or managed the activities of three or more direct reports. There
were 407 supervisors identified to participate; at the time of this study, 211 completed all
five modules. Surveys were administered to participants of the individual sessions of
each module, which occurred six or seven times during the 2-year period from
November 2012 through June 2014 (see Table 4).
Table 4
Training Intervention Session Dates by Module
Year   Project Management   Visioning &   Performance   Managing Conflict   Communication in
       & Measurement        Planning      Management    & Change            the Workplace
2012   Nov. 16              Dec. 7        Nov. 30
2013   Jan. 18              Feb. 1        Mar. 15       Feb. 22             Jan. 11
       June 14              May 24        June 7        May 31              Mar. 8
       Oct. 25              Oct. 4        Oct. 11       Oct. 18             May 17
       Nov. 22              Nov. 15       Dec. 6        Dec. 13
2014   Feb. 28              Feb. 7        Mar. 14       Feb. 14             Jan. 17
       May 23               June 6        June 20       May 9               Mar. 21
                                                                            May 16
All training participants were invited to take an online pretest the week before
each session and a posttest the week after, as well as an online reaction survey the day the
session was attended. Across all sessions, the attendance rate was approximately 75% of
registered participants, and the average survey participation rate was 35%. These surveys
constituted Level 1 and Level 2 of Kirkpatrick’s four levels of evaluation.
Behavioral observation of training participants by their supervisors and direct reports
(Level 3) was requested only of participants who completed all five modules. Of the 211
participants who completed the training program, 28 granted permission to survey their
supervisors and direct reports, yielding a survey population of 183 direct reports and 23
supervisors. Of those, 85 direct reports (46% participation rate) and 13 supervisors (56%
participation rate) responded to the behavioral observation survey (Appendix F).
Research Question 1 was addressed through an interview survey of stakeholder
expectations (Appendix B). Research Question 2 was addressed through the Level 1
reaction survey (Appendix D). Research Question 3 was addressed through the Level 2
knowledge pre- and posttraining tests (Appendix E). Research Question 4 was addressed
through the Level 3 survey of behavior (Appendix F). For each research question, response
sets were collected through online surveys for each session of each training module and
aggregated by module for analysis.
Data cleaning. Before performing statistical analyses, null observations were
identified: survey responses with a “completed” status and date stamp but with no
answers chosen for any of the questions.
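This screening rule can be sketched as follows. The example is hypothetical: the field names and record layout are assumptions for illustration, not the study’s actual survey export format.

```python
# Hedged sketch of the null-observation screen: a record counts as null
# when its status is "completed" and it carries a date stamp, yet no
# question field holds an answer.
def is_null_observation(record):
    answered = [v for k, v in record.items()
                if k.startswith("q") and v is not None]
    return (record.get("status") == "completed"
            and record.get("date_stamp") is not None
            and not answered)

responses = [
    {"status": "completed", "date_stamp": "2013-06-14", "q1": None, "q2": None},
    {"status": "completed", "date_stamp": "2013-06-14", "q1": "A", "q2": "C"},
]
cleaned = [r for r in responses if not is_null_observation(r)]
print(len(cleaned))  # 1
```

The first record is flagged as a null observation and dropped; only the record with actual answers remains for analysis.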
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia
Leadership Development Program Evaluation Dissertation FINAL_Venezia

More Related Content

Similar to Leadership Development Program Evaluation Dissertation FINAL_Venezia

Christopher Allen Final Dissertation
Christopher Allen Final DissertationChristopher Allen Final Dissertation
Christopher Allen Final DissertationChristopher Allen
 
Jackson_Yvonne_Final_Study_2016
Jackson_Yvonne_Final_Study_2016Jackson_Yvonne_Final_Study_2016
Jackson_Yvonne_Final_Study_2016Yvonne Jackson
 
Practicum Evaluation Paper-Riverpoint Writer
Practicum Evaluation Paper-Riverpoint WriterPracticum Evaluation Paper-Riverpoint Writer
Practicum Evaluation Paper-Riverpoint WriterLydia (Lydge) Veluz Reyes
 
Evaluation of the Focused Reading Intervention Program for Middle.pdf
Evaluation of the Focused Reading Intervention Program for Middle.pdfEvaluation of the Focused Reading Intervention Program for Middle.pdf
Evaluation of the Focused Reading Intervention Program for Middle.pdfReynaldo Calo
 
Attributes of quality programs
Attributes of quality programsAttributes of quality programs
Attributes of quality programsMónica Urigüen
 
The Individual Development Plan for Postdoctoral Professional Development
The Individual Development Plan for Postdoctoral Professional DevelopmentThe Individual Development Plan for Postdoctoral Professional Development
The Individual Development Plan for Postdoctoral Professional Developmentauthors boards
 
local_medjdjdjssjdia8573697915448051940.pptx
local_medjdjdjssjdia8573697915448051940.pptxlocal_medjdjdjssjdia8573697915448051940.pptx
local_medjdjdjssjdia8573697915448051940.pptxTrisciaJoyNocillado
 
Capstone - College Staff Perceptions of Student Success - Lucy Romao Vandepol
Capstone - College Staff Perceptions of Student Success  - Lucy Romao VandepolCapstone - College Staff Perceptions of Student Success  - Lucy Romao Vandepol
Capstone - College Staff Perceptions of Student Success - Lucy Romao VandepolLucy Romao Vandepol
 
Reflective Commentary
Reflective Commentary Reflective Commentary
Reflective Commentary susan70
 
Community Health Assessment Evaluation Report FINAL
Community Health Assessment Evaluation Report FINALCommunity Health Assessment Evaluation Report FINAL
Community Health Assessment Evaluation Report FINALJessica Tokunaga
 
Conducting An Action Research.pptx
Conducting An Action Research.pptxConducting An Action Research.pptx
Conducting An Action Research.pptxWilliamBulligan1
 
Decision Making as Professional Nursing Reflection Discussion.pdf
Decision Making as Professional Nursing Reflection Discussion.pdfDecision Making as Professional Nursing Reflection Discussion.pdf
Decision Making as Professional Nursing Reflection Discussion.pdfsdfghj21
 
Pokhara Technical School-Tracer study final report 2016
Pokhara Technical School-Tracer study final report 2016Pokhara Technical School-Tracer study final report 2016
Pokhara Technical School-Tracer study final report 2016Pokhara Technical School
 
self-concept research among orphans USIM by Lt. Mustaza Abu Bakar
self-concept research among orphans USIM by Lt. Mustaza Abu Bakarself-concept research among orphans USIM by Lt. Mustaza Abu Bakar
self-concept research among orphans USIM by Lt. Mustaza Abu BakarAlexander Graham Bell
 
Mentors Facilitating The Success Of Disadvantaged Students
Mentors Facilitating The Success Of Disadvantaged StudentsMentors Facilitating The Success Of Disadvantaged Students
Mentors Facilitating The Success Of Disadvantaged StudentsClayton State University
 

Similar to Leadership Development Program Evaluation Dissertation FINAL_Venezia (20)

Christopher Allen Final Dissertation
Christopher Allen Final DissertationChristopher Allen Final Dissertation
Christopher Allen Final Dissertation
 
Jackson_Yvonne_Final_Study_2016
Jackson_Yvonne_Final_Study_2016Jackson_Yvonne_Final_Study_2016
Jackson_Yvonne_Final_Study_2016
 
Final Capstone
Final CapstoneFinal Capstone
Final Capstone
 
Practicum Evaluation Paper-Riverpoint Writer
Practicum Evaluation Paper-Riverpoint WriterPracticum Evaluation Paper-Riverpoint Writer
Practicum Evaluation Paper-Riverpoint Writer
 
Evaluation of the Focused Reading Intervention Program for Middle.pdf
Evaluation of the Focused Reading Intervention Program for Middle.pdfEvaluation of the Focused Reading Intervention Program for Middle.pdf
Evaluation of the Focused Reading Intervention Program for Middle.pdf
 
Mashiur rahman
Mashiur rahmanMashiur rahman
Mashiur rahman
 
Attributes of quality programs
Attributes of quality programsAttributes of quality programs
Attributes of quality programs
 
The Individual Development Plan for Postdoctoral Professional Development
The Individual Development Plan for Postdoctoral Professional DevelopmentThe Individual Development Plan for Postdoctoral Professional Development
The Individual Development Plan for Postdoctoral Professional Development
 
local_medjdjdjssjdia8573697915448051940.pptx
local_medjdjdjssjdia8573697915448051940.pptxlocal_medjdjdjssjdia8573697915448051940.pptx
local_medjdjdjssjdia8573697915448051940.pptx
 
Capstone - College Staff Perceptions of Student Success - Lucy Romao Vandepol
Capstone - College Staff Perceptions of Student Success  - Lucy Romao VandepolCapstone - College Staff Perceptions of Student Success  - Lucy Romao Vandepol
Capstone - College Staff Perceptions of Student Success - Lucy Romao Vandepol
 
Ej773198
Ej773198Ej773198
Ej773198
 
Reflective Commentary
Reflective Commentary Reflective Commentary
Reflective Commentary
 
Overallanalysisofpracticum
OverallanalysisofpracticumOverallanalysisofpracticum
Overallanalysisofpracticum
 
Community Health Assessment Evaluation Report FINAL
Community Health Assessment Evaluation Report FINALCommunity Health Assessment Evaluation Report FINAL
Community Health Assessment Evaluation Report FINAL
 
Conducting An Action Research.pptx
Conducting An Action Research.pptxConducting An Action Research.pptx
Conducting An Action Research.pptx
 
Decision Making as Professional Nursing Reflection Discussion.pdf
Decision Making as Professional Nursing Reflection Discussion.pdfDecision Making as Professional Nursing Reflection Discussion.pdf
Decision Making as Professional Nursing Reflection Discussion.pdf
 
Pokhara Technical School-Tracer study final report 2016
Pokhara Technical School-Tracer study final report 2016Pokhara Technical School-Tracer study final report 2016
Pokhara Technical School-Tracer study final report 2016
 
Educator Toolbox
Educator ToolboxEducator Toolbox
Educator Toolbox
 
self-concept research among orphans USIM by Lt. Mustaza Abu Bakar
self-concept research among orphans USIM by Lt. Mustaza Abu Bakarself-concept research among orphans USIM by Lt. Mustaza Abu Bakar
self-concept research among orphans USIM by Lt. Mustaza Abu Bakar
 
Mentors Facilitating The Success Of Disadvantaged Students
Mentors Facilitating The Success Of Disadvantaged StudentsMentors Facilitating The Success Of Disadvantaged Students
Mentors Facilitating The Success Of Disadvantaged Students
 

Leadership Development Program Evaluation Dissertation FINAL_Venezia

  • 1. Evaluation of an Innovative Leadership-Development Program at a Private, Not-for-Profit University by Renee Venezia An Applied Dissertation Submitted to the Abraham S. Fischler School of Education in Partial Fulfillment of the Requirements for the Degree of Doctor of Education Nova Southeastern University 2015
  • 2. ii Approval Page This applied dissertation was submitted by Renee Venezia under the direction of the persons listed below. It was submitted to the Abraham S. Fischler School of Education and approved in partial fulfillment of the requirements for the degree of Doctor of Education at Nova Southeastern University. Barbara Packer-Muti, EdD Date Committee Chair Dian Moorhouse, EdD Date Committee Member Ronald J. Chenail, PhD Date Interim Dean
  • 3. iii Statement of Original Work I declare the following: I have read the Code of Student Conduct and Academic Responsibility as described in the Student Handbook of Nova Southeastern University. This applied dissertation represents my original work, except where I have acknowledged the ideas, words, or material of other authors. Where another author’s ideas have been presented in this applied dissertation, I have acknowledged the author’s ideas by citing them in the required style. Where another author’s words have been presented in this applied dissertation, I have acknowledged the author’s words by using appropriate quotation devices and citations in the required style. I have obtained permission from the author or publisher—in accordance with the required guidelines—to include any copyrighted material (e.g., tables, figures, survey instruments, large portions of text) in this applied dissertation manuscript. Signature Renee T. Venezia Name April 25, 2015 Date
  • 4. iv Acknowledgments Ralph Waldo Emerson is credited with the adage that it is not about the destination, but rather the journey. This journey has not been a solo venture. I have had guidance and assistance every step of the way. If this endeavor could be likened to running a marathon, then the people in my life have been part of Team Venezia. My brother, Joseph Venezia, has been my special running partner who showed up every day I needed him. He ran every single mile with me, pointing out alternatives in the trail that proved to be better routes. My dissertation chair, Dr. Barbara Packer-Muti, was my trainer, preparing and enabling me to reach the goal. My mother, Marguerite Venezia, has been my personal assistant, taking care of my daily needs to allow me to focus on my training. All my family and friends were cheerleaders along the whole 26 miles; their names all belong on this work for I could not have done it without them. They have my gratitude always. To my father, John Venezia, who always believed I could achieve anything: This is for you, Poppy.
  • 5. v Abstract Evaluation of an Innovative Leadership Development Program at a Private, Not-for- Profit University. Renee Venezia, 2015: Applied Dissertation, Nova Southeastern University, Abraham S. Fischler School of Education. ERIC Descriptors: Leadership Training, Program Evaluation, Management Development, Universities This applied dissertation was designed to determine the effectiveness of employee leadership training at a private, not-for-profit university. The goal of the study was to provide leaders at the university with evaluative information using the Kirkpatrick 4-level evaluation model regarding the effectiveness of a new leadership-development training program starting at the university for 400+ supervisors and managers. Literature supports the need for program evaluation, but employee training programs tend to be superficially evaluated, leaving executives without sufficient data to decide if the training was effective and if so, to what extent the organization benefits from the investment. If structured well, this study would serve as a model for future training evaluation at this university. The evaluation was based on Kirkpatrick’s 4 levels of evaluation; training participants were surveyed to determine reaction, learning, and behavior. Survey responses were analyzed to determine Level 4, results. Participants in the study were university managers and supervisors with 3 or more subordinates. Study results showed overall satisfaction with training by participants, evidence of learning, and training behaviors observed on the job by supervisors and direct reports of participants, but lack of evidence to confirm the training meets executive stakeholder expectations.
  • 6. vi Table of Contents Page Chapter 1: Introduction....................................................................................................... 1 Statement of the Problem........................................................................................ 1 Program................................................................................................................... 4 Purpose of the Evaluation....................................................................................... 5 Definition of Terms................................................................................................. 6 Chapter 2: Literature Review.............................................................................................. 8 Conceptual Framework......................................................................................... 10 Synthesis of Findings............................................................................................ 13 Need for Further Research.................................................................................... 14 Leadership Programs and Assessments at Other Universities.............................. 15 Summary of the Literature.................................................................................... 32 Research Questions............................................................................................... 34 Chapter 3: Methodology ................................................................................................... 35 Program................................................................................................................. 35 Participants............................................................................................................ 37 Evaluation Model.................................................................................................. 
38 Instruments............................................................................................................ 39 Procedures............................................................................................................. 40 Chapter 4: Results............................................................................................................. 44 Population and Data Collection ............................................................................ 44 Research Question 1 ............................................................................................. 46 Research Question 2 ............................................................................................. 47 Research Question 3 ............................................................................................. 56 Research Question 4 ............................................................................................. 63 Research Question 5 ............................................................................................. 75 Chapter 5: Discussion ....................................................................................................... 76 Overview of the Study .......................................................................................... 76 Findings in Relation to Research Questions ......................................................... 76 Implications........................................................................................................... 81 Conclusions........................................................................................................... 82 Limitations............................................................................................................ 83 Recommendations for Further Research............................................................... 
85 References......................................................................................................................... 87 Appendices A University Vision and Mission ...................................................................... 93 B Training Value Interview Instrument ............................................................ 95 C University 2010–2020 Business Plan Overview ........................................... 97
  • 7. vii D Leadership Training: Level 1 Reaction Survey............................................. 99 E Leadership Training: Level 2 Knowledge Test ........................................... 122 F Leadership Training: Level 3, 360-Degree Survey: Observable On-the- Job Behavioral Changes .............................................................................. 130 Tables 1 Model of Comparison Analysis for Level 2 Pre- and Posttraining Knowledge Testing........................................................................................ 41 2 Example of Level 2 Learning Survey Responses Indicating No Evidence of Learning..................................................................................................... 42 3 Example of Level 2 Learning Survey Responses Indicating Evidence of Learning......................................................................................................... 43 4 Training Intervention Session Dates by Module ........................................... 44 5 Executive Stakeholder Expectations of the Leadership-Development Program.......................................................................................................... 47 6 Level 1 Evaluation: Satisfaction With Course Content and Importance of Content to Supervisors, by Module ............................................................... 55 7 Project Management and Measurement Module: Pre- and Posttest Response Frequency and Percentage............................................................. 56 8 Chi-Square Statistical Analysis of Project Management and Measurement Pre- and Posttest Responses.................................................... 57 9 Project Management and Measurement Pre- and Posttest Parameter Estimate and Wald Significance Test............................................................ 
57 10 Performance Management Module: Pre- and Posttest Response Frequency and Percentage ............................................................................. 58 11 Chi-Square Statistical Analysis of Performance Management Pre- and Posttest Responses......................................................................................... 58 12 Performance Management Pre- and Posttest Parameter Estimate and Wald Significance Test.................................................................................. 58 13 Managing Conflict and Change Module: Pre- and Posttest Response Frequency and Percentage ............................................................................. 59 14 Chi-Square Statistical Analysis of Managing Conflict and Change Pre- and Posttest Responses .................................................................................. 59 15 Managing Conflict and Change Pre- and Posttest Parameter Estimate and Wald Significance Test........................................................................... 60 16 Communication in the Workplace Module: Pre- and Posttest Response Frequency and Percentage ............................................................................. 60 17 Chi-Square Statistical Analysis of Communication in the Workplace Pre- and Posttest Responses........................................................................... 61 18 Communication in the Workplace Pre- and Posttest Parameter Estimate and Wald Significance Test........................................................................... 61 19 Visioning and Planning Module: Pre- and Posttest Response Frequency and Percentage............................................................................................... 61 20 Chi-Square Statistical Analysis of Visioning and Planning Pre- and Posttest Responses......................................................................................... 62
  • 8. viii 21 Visioning and Planning Pre- and Posttest Parameter Estimate and Wald Significance Test ........................................................................................... 62 22 Level 2 Evaluation: Pre- and Posttest Percentage of Correct Answers by Module........................................................................................................... 63 23 Project Management Behavioral Survey Response Frequency, All Respondents................................................................................................... 64 24 Project Management and Measurement Chi-Square Analysis: Direct Reports Versus Supervisors........................................................................... 65 25 Project Management and Measurement Chi-Square Analysis: Training Participants Versus Supervisors .................................................................... 65 26 Project Management and Measurement Chi-Square Analysis: Training Participants Versus Direct Reports................................................................ 66 27 Performance Management Behavioral Survey Response Frequency, All Respondents................................................................................................... 66 28 Performance Management and Measurement Chi-Square Analysis: Direct Reports Versus Supervisors................................................................ 67 29 Performance Management Chi-Square Analysis: Training Participants Versus Supervisors ........................................................................................ 67 30 Performance Management Chi-Square Analysis: Training Participants Versus Direct Reports.................................................................................... 68 31 Managing Conflict Behavioral Survey Response Frequency, All Respondents................................................................................................... 
32 Managing Conflict and Change Chi-Square Analysis: Direct Reports Versus Supervisors
33 Managing Conflict and Change Chi-Square Analysis: Training Participants Versus Supervisors
34 Managing Conflict and Change Chi-Square Analysis: Training Participants Versus Direct Reports
35 Communication in the Workplace Behavioral Survey Response Frequency, All Respondents
36 Communication in the Workplace Chi-Square Analysis: Direct Reports Versus Supervisors
37 Communication in the Workplace Chi-Square Analysis: Training Participants Versus Supervisors
38 Communication in the Workplace Chi-Square Analysis: Training Participants Versus Direct Reports
39 Visioning and Planning Behavioral Survey Response Frequency, All Respondents
40 Visioning and Planning Chi-Square Analysis: Direct Reports Versus Supervisors
41 Visioning and Planning Chi-Square Analysis: Training Participants Versus Supervisors
42 Visioning and Planning Chi-Square Analysis: Training Participants Versus Direct Reports
43 Percentages of Observer Groups Reporting Positive Behavioral Observations of Training Participants Compared to Training Participants' Positive Reactions
44 Level 3 Evaluation: Self-Perception Responses to Specific Executive Expectations

Figures

1 Kirkpatrick's Four Levels of Evaluation Grouped by Individual or Organizational Influence
2 Distribution of Responses to Item 2 of the Project Management and Measurement Section of the Reaction Survey: "How Satisfied Were You With the Content of the Project Management and Measurement Module?"
3 Distribution of Responses to Item 7 of the Project Management and Measurement Section of the Reaction Survey: "How Satisfied Were You That the Information Provided Is Important for Someone in a Supervisory Role at the University?"
4 Distribution of Responses to Item 2 of the Performance Management Section of the Reaction Survey: "How Satisfied Were You With the Content of the Performance Management Module?"
5 Distribution of Responses to Item 7 of the Performance Management Section of the Reaction Survey: "How Satisfied Were You That the Information Provided Is Important for Someone in a Supervisory Role at the University?"
6 Distribution of Responses to Item 1 of the Managing Conflict and Change Section of the Reaction Survey: "How Satisfied Were You With the Content of the Managing Conflict and Change Module?"
7 Distribution of Responses to Item 7 of the Managing Conflict and Change Section of the Reaction Survey: "How Satisfied Were You That the Information Provided Is Important for Someone in a Supervisory Role at the University?"
8 Distribution of Responses to Item 1 of the Communication in the Workplace Section of the Reaction Survey: "How Satisfied Were You With the Content of the Communication in the Workplace Module?"
9 Distribution of Responses to Item 7 of the Communication in the Workplace Section of the Reaction Survey: "How Satisfied Were You That the Information Provided Is Important for Someone in a Supervisory Role at the University?"
10 Distribution of Responses to Item 2 of the Visioning and Planning Section of the Reaction Survey: "How Satisfied Were You With the Content of the Visioning and Planning Module?"
11 Distribution of Responses to Item 7 of the Visioning and Planning Section of the Reaction Survey: "How Satisfied Were You That the Information Provided Is Important for Someone in a Supervisory Role at the University?"
Chapter 1: Introduction

Statement of the Problem

The topic. The study university is a private, not-for-profit institution of higher education that launched a mandatory leadership-development training program for all supervisors who have three or more direct reports. At the time the study commenced, the university was in its 48th year and under new leadership, with a new vision and a 10-year business plan (beginning in 2010) to achieve the goal of becoming recognized as a premier institution by 2020. Essential to this effort was a presidential initiative to develop talent from within the organization through training. The training is expected to develop future organizational leaders as well as to mobilize employees in support of the organizational vision (see Appendix A).

The research problem. As the program was new for the university, administrators wanted to know whether the training was effective. During the pilot phase of the program, the evaluation process assessed participant reaction and self-reporting as to whether there was an increase in learning. To determine overall program effectiveness, administrators required a more extensive assessment, including observations of training participants to determine whether they were actually applying the new knowledge on the job. In addition, administrators wanted to know whether training leadership personnel resulted in measurable impact in support of the university's vision, mission, values, and goals. According to Chong (2005), "Decisions should be made based on careful observations and a clear understanding of the relevant factors involved, as well as the goals of the learning. As is the case with formal training programs, results should be carefully evaluated" (p. 18). To make educated decisions about modifications or even continuation of the leadership training program, administrators would require sufficient data and feedback, which was the basis for the study.

Background and justification. Because of the current economic climate, as well as predictions of a qualified workforce shortage in the future, organizations are investing more in employee training (Ashford & DeRue, 2010). Establishing effective corporate learning programs is essential to maximizing training investment (Stevens & Frazer, 2005). To determine effectiveness of the training, organizations must measure return on investment (ROI) through evaluation and assessment (Cohen, 2005). "In today's tough business climate, it is imperative that learning professionals link learning initiatives to business goals and prove their value in this new workplace" (D. Kirkpatrick, 2010, p. 16).

Deficiencies in the evidence. The existing body of literature supporting the importance of evaluating training effectiveness is extensive, particularly using D. Kirkpatrick's (2006) four-level model. However, there is an absence of documented research efforts demonstrating the application of the third and fourth levels of the model, specifically including the use of control groups. The third and fourth levels, behavior and results, have garnered less than 15% utilization by organizations, mostly because of the complexity involved in using control groups (Cohen, 2005). In a 2011 study by the American Society for Training and Development, 91% of organizations surveyed used the Kirkpatrick four-level model, although only 35% engaged in the fourth-level evaluation of results (Galagan, 2011). In the 1980s, a fifth level was developed, ROI calculation, but Galagan (2011) questioned executives about the value of such a calculation. Most chief executive officers agreed that change in behavior and improvement in performance are the leading indicators that reflect effective training (Galagan, 2011). J. Kirkpatrick and Kirkpatrick (2010) stated that understanding desired business outcomes allows training to be measured against return on expectations (ROE). ROE differs from ROI in that ROI calculations focus on the outcomes of the training rather than on the expected outcomes for the business (J. Kirkpatrick & Kirkpatrick, 2010).

Audience and stakeholders. At the time the study was conducted in 2012, the university employed over 4,000 faculty, staff, and administrators. Of this population, approximately 650 supervised one or more employees, and 407 supervised three or more employees. This latter population of supervisors was mandated to participate in training. Future participants would include all supervisors (newly hired and promoted), and participation would be mandated. Those benefiting from this study include future training participants, as the training may improve as each evaluation cycle reports strengths, areas of weakness, and suggestions for modification and enhancement. Participants' direct supervisors, those who report to the training participants, as well as the organization as a whole would benefit, as improved training should yield greater effectiveness in the form of increased employee knowledge and application of knowledge to the work environment. Overall, the university should see a benefit in terms of positive impact to the university's vision and goals. Stakeholders include training participants, as well as their supervisors and subordinates who might be indirectly affected by the participants' possible increased knowledge. Other stakeholders include the Board of Trustees and university executive leadership, who required an assessment of the instructional modules, facilitators, and impact of training to determine whether it is having a positive impact on the organization and whether it should be continued or altered.

Program

Executive expectations. To determine whether training meets executive expectations, three executives of the university—the university president, the executive vice president and chief operating officer, and the provost and executive vice president of academic affairs—were interviewed individually to ascertain what outcomes were expected as a result of the training intervention (see Appendix B). The interviews yielded three major themes: individual development and performance improvement, increased employee engagement, and organizational agility.

The executives interviewed expected that the leadership training would affect individual development through an increase in professional development and performance. This was anticipated to take the form of employees taking more initiative, supervisors delegating more tasks, and increased networking within the various levels of the employment hierarchy. These were observable behaviors that could be identified through surveys during the Kirkpatrick third level of evaluation, to be administered posttraining. An improvement in performance was also expected to be observed and reflected in formal, annual performance reviews by supervisors.

The interviewed executives predicted this increase in individual development would lead to increased employee engagement. The university has participated in biannual employee-engagement surveys, and the executives wished to see an increase in engagement scores among those who were engaged, along with a decrease in disengaged employees. As a possible derivative of engagement, executives were hopeful of a decrease in employee turnover, as well as fewer employee-relations issues and grievances.

The final expectation of the interviewed executives, organizational agility, was expressed in terms of timelines. The leadership training should correlate with faster decision making, shorter time frames for implementations and projects, and accelerated accomplishment of the university's 10-year business plan (Appendix C). The business plan includes six measurable, strategic priorities for which the executives wished to see postintervention progress that correlated with the leadership training.

Professional evaluation standards. The researcher followed the standards set by the Joint Committee on Standards for Educational Evaluation (2011). Utility standards are designed to ensure the evaluation is useful to stakeholders. Feasibility standards increase effectiveness and efficiency of the evaluation. Propriety relates to fairness and legality. Accuracy standards support clear data collection and reporting. Evaluation accountability standards relate to documentation of the evaluation process.

Purpose of the Evaluation

The purpose of the study was to evaluate the effectiveness of a new leadership training program at a private, not-for-profit university. Effectiveness was measured by the learning outcomes self-reported by the participants, as well as by observed participant behavioral changes on the job as reported by the participants' managers, supervisors, and subordinates (here termed direct reports). Other measurable impacts were to include reduced employee turnover, increased engagement, and fewer employee-relations issues. Organizational impacts were to be measured in terms of shorter timelines for decisions and projects as well as faster progression toward the university's priorities and overall vision. Kirkpatrick's four levels of evaluation provided the framework for data collection. The four levels are reaction, Level 1; learning, Level 2; behavior, Level 3; and results, Level 4 (D. Kirkpatrick & Kirkpatrick, 2006). To evaluate reaction, participants were asked to complete a postsession survey, quantifiable by degree of satisfaction. Level 2, learning, was measured by pre- and postsession knowledge testing. Level 3, behavior, was evaluated by interviewing the participants and their managers to determine whether learning was indeed applied in the workplace. Results, Level 4, was to include an assessment of workplace metrics such as engagement, turnover, and other such measurements compared against a baseline assessment to determine whether training could be correlated to improvement in individual and organizational performance.

Definition of Terms

Kirkpatrick's four levels of evaluation. This program-evaluation framework is intended to evaluate reaction, learning, behavior, and results of training and development programs (D. Kirkpatrick & Kirkpatrick, 2007). Level 1 refers to a postintervention reaction survey intended to measure participants' satisfaction with the intervention. Level 2 refers to the learning or knowledge component of the training or intervention. Level 3 refers to the evaluation of behaviors affected by the intervention. Level 4 refers to the overall results of the first three levels (D. Kirkpatrick & Kirkpatrick, 2007).

Intervention. This term refers to the training sessions and materials presented to the participants; postintervention is the time period following the training.

Employee turnover. This term refers to the number of employees who leave an organization (either voluntarily or involuntarily) and must be replaced, relative to the number of active positions.

Employee engagement. According to Gallup (2009), "Engagement relates to the levels of emotional attachment and productivity within each constituency and each individual constituent" (p. 3). Employees who are engaged tend to be more productive; as a result, the organization for which they work is often more productive (Gallup, 2009).
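To make the turnover definition above concrete, the following sketch computes turnover as separations relative to active positions. It is illustrative only; the function name and the figures are hypothetical and not drawn from the study.

```python
# Illustrative sketch: employee turnover as separations relative to
# active positions, per the definition above. Figures are hypothetical.

def turnover_rate(separations, active_positions):
    """Return turnover as a percentage of active positions."""
    if active_positions <= 0:
        raise ValueError("active_positions must be positive")
    return 100.0 * separations / active_positions

# Hypothetical year: 48 employees left out of 650 active positions.
rate = turnover_rate(48, 650)
print(f"Turnover rate: {rate:.1f}%")  # 7.4%
```

A pre- and posttraining comparison of this rate is the kind of workplace metric the Level 4 evaluation was intended to assess against a baseline.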
Chapter 2: Literature Review

A private, not-for-profit university headquartered in South Florida launched a new leadership training program for executives, managers, and supervisors. At the time of this study, the university was in its 48th year and under new presidential leadership and vision. Training was identified as one of many methods for mobilizing the organization to pursue the new vision. The university documented values and goals in support of the university's vision and mission statements (see Appendix A). Many of the values reflect transformational leadership theory, endeavoring to foster professional development of staff with the express interest in growing leadership from within. According to Nikolic and Robinson (2012), "Transformational theory focuses on inspiration and empowerment of followers" (p. 101). Nikolic and Robinson further asserted that leadership must influence followers if the organization is to meet its goals. Since the early 1990s, research has supported a connection between transformational leadership behaviors and employee commitment (Dunn, Dastoor, & Sims, 2012). Questions remain, however, as to whether transformational leadership can be developed at all levels of an organization, not just at the senior level (Northouse, 2004).

Organizations today are changing at a rapid pace and coping with reduced resources. There is an urgent need for organizations to attract and develop transformational leadership (Warrick, 2011). Abrell, Rowold, Weibler, and Moenninghoff (2011) claimed that transformational leadership is considered beneficial to organizations economically, hence the increasing interest in training and development of such leadership. The American Society for Training and Development (2009) also asserted that executive development is crucial to the success of organizations. Kouzes and Posner (2007) stated that it is possible to train and develop anyone for leadership roles, not just those individuals with a natural disposition to leadership. Fitzpatrick, Sanders, and Worthen (2004) observed, "Organizations are more concerned with performance and the impact of training on the organization" (p. 491) and as a result are focusing training on organizational performance indicators. This highlights the need for training and development programs, and the ability to evaluate these programs is fundamental to organizational success (Warrick, 2011).

To evaluate training by observation of participants, growth or changes in behaviors must be exhibited and recorded. Phillips (2007) asserted, "To connect to the business impact, the behavior change must link to a clear business consequence" (p. 11). J. Kirkpatrick and Kirkpatrick (2011) pointed out, however, that training initiatives tied to the mission of the organization are only one of many organizational activities that could affect results. "This makes it costly and mathematically impossible to isolate the impact of the training program alone on the organizational results attained" (J. Kirkpatrick & Kirkpatrick, 2011, p. 61). Easier to measure would be a defined set of learning objectives that have been identified by content experts as part of the training curriculum. J. Kirkpatrick and Kirkpatrick (2011) also suggested that stakeholders identify expected outcomes of the intervention and, through the evaluation process, uncover the degree to which expectations have been met.

This literature review was designed to inform and direct further development of the study. Relevant areas for exploration in the literature were identified and include (a) discussion of evaluation models, (b) criticisms of Kirkpatrick's four-level evaluation model, (c) support for the Kirkpatrick model, (d) synthesis of the findings, (e) need for further research, and (f) leadership-development programs at benchmark universities. Each area is explored in depth below. The literature review was instrumental in the formulation of research questions established for the study, which are presented following the summary of the literature review.

Conceptual Framework

Evaluation models. Several models are available to evaluate processes, but not all adapt easily to evaluation of training. For example, Stufflebeam's context, input, process, and product model is considered useful because it helps the evaluator generate interview or survey questions using systems orientation (Haynes & Ghosh, 2008). However, the context, input, process, and product model requires refinement to each specific training program to respond to trends, philosophies, and perspectives (Horng, Teng, & Baum, 2009). A similar model, context, inputs, reactions, and outcomes, partially addresses weaknesses in the Kirkpatrick model with regard to the lack of preliminary assessment of business requirements but does not assess business impact (Elliott, Dawson, & Edwards, 2009).

Another example of a process-evaluation model is Brinkerhoff's success case model (as cited in Bersin, 2008). Bersin (2008) pointed out that the success case model can be used to identify successful trainees but "does not attempt to build a complete end-to-end measurement model" (p. 70). Fitzpatrick et al. (2004) suggested using Kirkpatrick's model for results-based evaluation and supplementing with Brinkerhoff's success case model approach to identify other issues.

Bersin (2008) pointed out that the two most popular models to date were Kirkpatrick's four levels of evaluation and Phillips's fifth level of ROI, which expands on Kirkpatrick's model. Phillips (as cited in Bersin, 2008) attempted to add on where he believed Kirkpatrick left off. However, Bersin stated both models "limit an organization's thinking and make the measurement process difficult to implement" (p. 4). Elliott et al. (2009) also contended that the surveys Phillips used to determine ROI were subject to personal bias. Bersin suggested that ROI results are at best difficult, if not impossible, to correlate or attribute to training. Bersin found ROI limiting because "it requires measurement of business impact over time and computation of costs over time, both very difficult to measure reliably and even harder to measure consistently across every training program" (p. 61). Bersin's (2008) own model, the impact-measurement framework, extends the Kirkpatrick model by building on its principles. The impact-measurement framework expands into nine measurement areas, from which the evaluator can choose those appropriate to the specific program being evaluated. At the time of this study, no peer-reviewed literature was found that involved the use of the impact-measurement framework as an evaluation tool.

Criticism of Kirkpatrick's four levels. For many years, Kirkpatrick's four levels of evaluation was the only formal model available to evaluate employee training. It has been criticized as being oversimplified (Bates, 2004; Galloway, 2005). Galloway (2005) contended that Kirkpatrick assumed generalization to larger populations, as well as narrowly attributing outcomes to training effectiveness, without regard for other variables. Bates (2004) noted the four-level model does not consider the influence of the individual. Bates listed other limitations:

There are at least three limitations of Kirkpatrick's model that have implications for the ability of training evaluators to deliver benefits and further the interests of organizational clients. These include the incompleteness of the model, the assumption of causality, and the assumption of increasing importance of information as the levels of outcomes are ascended. (p. 342)

Bersin (2008) also contended that Kirkpatrick's model is incomplete and that the four levels are not causally related. For example, satisfaction or reaction does not necessarily correlate with or lead to learning. In addition, learning does not necessarily lead to behavioral changes. Galloway (2005) criticized the second level of evaluation, learning, in that it does not recognize "barriers to transfer" (p. 23); further, self-assessments are subject to bias, thus calling into question the reliability of such an assessment. Bates (2004) asserted that the Kirkpatrick model neglects perceptions and expectations of training by stakeholders. Elliott et al. (2009) also claimed that, with Kirkpatrick, training is not evaluated in the context of organizational or business needs. Bersin (2008) similarly alleged that the model does not justify how training impacts the organization.

Because of the shortcomings of the model, there are serious concerns about the ability of an organization to fully utilize the model as intended. Cohen's (2005) research disclosed that less than 15% of organizations that engaged in Kirkpatrick's four levels of evaluation actually followed through with Levels 3 and 4. Cohen also maintained the Kirkpatrick model does not require a crucial component of third-level evaluation, which is the use of control groups. In smaller organizations, once the target population has moved through the training, control groups may not be possible. Cohen also claimed a missing component in Kirkpatrick's fourth-level evaluation is the use of longitudinal surveys.
Determining long-term effects, such as those assessed by longitudinal evaluation, is the most problematic aspect of evaluation (Ding, 2009).
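The control-group comparison discussed above reduces, in its simplest form, to testing whether an observed outcome is independent of group membership. As a minimal sketch of that computation (the counts and function name are hypothetical, not data from this study), a 2 × 2 chi-square statistic can be computed directly:

```python
# Hedged illustration: Pearson chi-square statistic for a 2x2
# contingency table [[a, b], [c, d]], the kind of comparison a
# control-group design enables. Counts below are hypothetical.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 table, no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical: 34 of 50 trained supervisors vs. 21 of 50 untrained
# supervisors observed applying the target behavior on the job.
stat = chi_square_2x2(34, 16, 21, 29)
print(f"chi-square = {stat:.2f}")  # 6.83, above 3.84 (df = 1, alpha = .05)
```

A statistic above the critical value suggests the behavior is not independent of training, which is exactly the inference a control group makes possible and its absence makes problematic.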
Support for Kirkpatrick's four levels. Criticisms notwithstanding, Lin, Chen, and Chuang (2011) referred to Kirkpatrick's four-level model as the most well-known and universally used in performance evaluation. Even though Bates (2004) was critical of the model overall, he recognized that its popularity is due to the simplicity it offers. Elliott et al. (2009) called the model a "simple, yet effective evaluation system" (p. 658). Bersin (2008) also acknowledged Kirkpatrick's model as "widely understood" (p. 57) and valuable as a "thinking tool" (p. 58); Bersin built his impact-measurement framework upon the principles of the Kirkpatrick model. Rouse (2011) stated, "Kirkpatrick's evaluation framework provides an excellent framework to determine strengths and weaknesses of . . . instruction" (p. 3). Galloway (2005) attributed the present and enduring popularity of the four levels to "the ease by which strategic alignment can be measured" (p. 22). By and large, the consensus is that the Kirkpatrick model is straightforward and popular for that reason.

The intention for the evaluation process is to measure and evaluate the effectiveness of training, and because the training is expected to be repeated, the evaluation process needs to be repeatable as well (Bersin, 2008). After the completion of this initial study, the Office of Human Resources at the study university intended to continue evaluation. As the university had not previously formally evaluated training, a simplified, repeatable process like the Kirkpatrick model would be a solid framework from which to start.

Synthesis of Findings

Due to its simplicity and ability to be repeated throughout the lifecycle of the training program, Kirkpatrick's four levels of evaluation (reaction, learning, behavior, results) seemed to be the best framework by which to evaluate the leadership training program at the study university. Some of the earlier criticism of the framework, such as not assessing business impact, has been addressed by Kirkpatrick in recent years. The concept of ROE was introduced, requiring the evaluator to understand stakeholders' expectations of the training program, and has been added as a precursor activity to Level 1 as well as measurement and review during Level 4 (D. Kirkpatrick & Kirkpatrick, 2007; J. Kirkpatrick & Kirkpatrick, 2010). Using the phrase "start with the end in mind" (D. Kirkpatrick & Kirkpatrick, 2007, p. 108), evaluators are encouraged to ask stakeholders to define what would constitute successful outcomes. The evaluator then can build metrics around those outcomes to determine effectiveness of training, according to stakeholder definitions. Reporting back to the stakeholders with the appropriate metrics allows them to make the final determination as to whether training could be considered effective.

Donald Kirkpatrick originally developed the four levels specifically to evaluate supervisory training, although the model is deemed effective with many well-defined strategic training needs (J. Kirkpatrick, 2007). The leadership training program at the study university includes supervisor, manager, and leadership concepts, with the intent of advancing organizational goals. The Kirkpatrick model therefore provided an appropriate evaluation framework.

Need for Further Research

A growing trend among organizations is for trainers to demonstrate alignment of training, training outcomes, and organizational goals (Abrell et al., 2011; Holzer, 2012; Phillips, Brantley, & Phillips, 2011; Warrick, 2011). Prior to the current economic downturn, trainers were asked to provide specific training but relied only on smile or reaction surveys and headcount as proof of effectiveness. Today, organizations want to know not only the cost of training but also the returns gained (Warrick, 2011). Holzer (2012) and other researchers have begun to highlight the importance of ensuring a link between training programs and employer needs. Phillips et al. (2011) cited lack of alignment to business goals from the outset as the top reason for training project failure. According to Phillips et al.,

The end must be specified in terms of business needs and business measures so that the outcome—the actual improvement in the measures—and the corresponding ROI are clear. This establishes the expectations throughout the analysis and project design, development, delivery, and implementation stages. (p. 53)

This is why Phillips et al., Bersin (2008), and others have attempted to expand upon the current models for evaluating training, although there is still not much evidence in the literature documenting successful evaluations against organizational goals.

Leadership Programs and Assessments at Other Universities

Leadership development historically has focused on four specific areas: (a) decision making, (b) conceptual differences between leaders and managers, (c) team building, and (d) feedback (Boaden, 2006). Outcomes of leadership-development training are important but difficult to measure (Boaden, 2006). Organizations commit to leadership training but often fall short in providing an environment in which the developed leadership can succeed (Boaden, 2006). Boaden (2006) stated,

The lack of scholarly knowledge about leadership development is again cited here, but it is also pointed out that most leadership models were developed for a more stable and predictable environment than now, highlighting the need for further understanding of the relationship between leadership and change. (p.
10) Ruben (2005) observed that most academic and administrative units within
  • 25. 16 colleges and universities have many initiatives that they do not have the resources to accomplish. It is therefore vital that university leaders are clear in the prioritization of initiatives and, for those who are first-time leaders, are coached in the process. Consulting firm Korn/Ferry International, global provider of leadership-development consulting, has worked with many organizations including colleges and universities in creating or evolving leadership-development programs. Six universities were presented as benchmark institutions by Korn/Ferry consultants Uher and Johnson (2011): (a) Emory University, (b) Rice University, (c) Cornell University, (d) University of Virginia, (e) Duke University, and (f) Vanderbilt University. Emory University. Emory University, a private research university located in Atlanta, Georgia, as of Fall 2012 had student enrollment of 14,236 and a faculty and staff headcount of 13,012 (Emory University, 2012b). Research funding totaled $518.6 million in Fiscal Year 2012. The university was ranked 20th among national universities in the U.S. News & World Report (2012) ranking. Emory provides multiple levels of professional development training, differentiating between supervisory, managerial, and leadership. Participation in the Excellence Through Leadership program is limited to nomination and selection, also meeting the following criteria: (a) current position is director (or equivalent) or higher, (b) completed a minimum of 1 year full-time service at Emory, (3) considered a high performer, and (d) regarded as a high-potential candidate for future advancement at the university (Emory University, 2012a). Participants must commit to attend all sessions, and sponsors must attend the orientation session with the participant. The program is 1 year, and participants are limited to 15 to 25 per cohort year. Smaller groups allow time
  • 26. 17 for individual assessment and smaller workgroups for team projects. Facilitation is provided by university faculty considered subject-matter experts in the various topics. The program has four main areas of focus: (a) people focus, (b) personal focus, (c) business focus, and (d) results focus (Emory University, 2012a). People focus centers on interpersonal skills and people management. Personal focus concerns personal effectiveness and integrity. Business focus highlights strategic thinking and organizational excellence. Results focus centers on results in terms of organizational goals and initiatives. Actual university issues are assigned to teams as projects, with deliverables presented to university senior leaders for approval. If approved, solutions are implemented. Success of the program is measured by the retention rate of participants, as well as documented career progress, both internally and externally, where feasible to ascertain. Feedback from program participants offered that the additional projects were too time consuming in addition to participants’ normal workload, and it was later learned some potential participants are opting out of the program as a result. Other feedback praised the team projects because they were real-world challenges to solve and offered participants the opportunity to immediately apply learning (Uher & Johnson, 2011). Rice University. Rice University is a private research university located in Houston, Texas. As of Fall 2011, Rice student enrollment was 6,082, with support faculty and staff of 2,850 (Rice University, 2011). Rice University’s vision, Vision for the Second Century, includes a 10-point plan adopted by the Rice University Board of Trustees in 2005. In response to the call to action, the RiceLeaders program was codeveloped the following year by the university vice president for administration, the
  • 27. 18 associate vice president for human resources, and a faculty subject-matter expert from the university’s Jones Graduate School of Business. RiceLeaders was launched in 2007 (Kirby, 2011). Key topic areas in the program include (a) self-awareness-based leadership and assessments, (b) creativity, (c) communication, (d) strategy, and (e) teams. In addition, the program includes coaching and action-learning projects to apply knowledge and skills learned in training on the job. The first goal of the program is to improve collective leadership competence at the university. The second goal is to deeply connect participants with the university vision and strategy. The third goal is to garner support for change throughout the university, starting with participants’ sphere of influence. The fourth and final goal is to “knit the university together” (Kirby, 2011, p. 5). Participants in RiceLeaders are identified by the amount of influence versus the actual position held (Kirby, 2011). Cohorts of 25 to 30 participants are chosen from a broad spectrum of departments and asked to consider carefully the commitment prior to accepting a seat. The program entails 12 days of participation over a 12-month period. Four modules 2 to 2.5 days long are spaced out every 3 to 4 months. The modules are (a) Fundamentals of Leadership, (b) Teams, (c) Creativity, and (d) Strategy Implementation (Kirby, 2011). Fundamentals of Leadership focuses on core concepts of leadership and self- awareness (Kirby, 2011). Self-assessment instruments are used and scored for participants to understand their own style and strengths. The session is held off campus and overnight to foster team building among the cohort participants. The Teams module covers the dynamics of work teams. The focus is on what makes teams effective and how
leadership can influence teams to maximize effectiveness. In the Creativity session, participants are guided through interpretation and change management in support of the Vision for the Second Century. Strategy Implementation shares ways in which participants can navigate through higher education's propensity for consensus and consultation and "get stuff done" (Kirby, 2011, p. 6).

In addition to the modules, participants are coached in individual sessions by an organizational psychologist, assigned to action-learning project teams of five participants, and participate in three to five community-building events over the course of the program. These additional activities are intended to encourage networking, apply knowledge on the job, and foster relationships through a sense of community. The action-learning projects draw from a list of university issues, such as helping to define the Vision for the Second Century criteria for success.

Evaluation of the program began 3 years after the first cohort completed the program (Uher & Johnson, 2011). The first focus group was conducted with a third of the original 100 participants. The results yielded four main themes: (a) New skills and behaviors were learned, (b) business strategy was better understood, (c) participants did not substantially change how they contributed to the university but overall became more participative, and (d) new work relationships were formed. In addition to focus groups, Rice University staff began reaction surveys after each cohort. They also began tracking promotions of participants as an indicator of training effectiveness (Uher & Johnson, 2011).

Cornell University. Cornell University is a private, Ivy League research university. Cornell has a main campus located in Ithaca, New York, with two medical
campuses located in New York City and abroad in Qatar. As of 2011, the student enrollment was 22,254 across all three campuses (Cornell University, 2011). The headcount of faculty and staff was 9,645, also across all three campuses. Ranked 15th among national universities by U.S. News & World Report in 2012, Cornell is known for research and for filing and receiving a high number of patents.

In response to the worsening economy, in 2010 Cornell launched a new 5-year business plan to bring the university to its sesquicentennial anniversary. The foundational concept of the plan is "One Cornell" (Cornell University, 2010, p. 2), intended to focus resources to position the university for excellence in priority areas and ensure the financial health of the university. Through five overarching goals, five objectives, and seven strategic initiatives, Cornell University (2010) proposed to attract the best and most diverse students, faculty, and staff and to offer excellence in teaching and research. The plan delineates more than a dozen core metrics to be tracked over the life of the 5-year plan, including detailed qualitative and quantitative indicators for each metric.

Once the framework had been laid, the university needed to engage the faculty and staff in support of the plan. Translating and implementing the plan operationally requires strong leadership: "Having strong leaders at Cornell is essential if the university is to embrace the theme of the university's new strategic plan" (Doolittle, 2008, para. 1). Leadership development was identified as a key component of successfully carrying out the business plan.

Cornell's Office of Organizational Development Services created and implemented leadership-development opportunities through programs including the Supervisory Development Certificate and the Management Academy. These programs
are for staff at various levels of leadership and at various stages in their careers, including frontline supervisors and midlevel managers. The Harold D. Craft Leadership Program is available by supervisor or unit manager nomination only, offering more advanced leadership concepts to high-performing, high-potential individuals. The capstone course, called Leading Cornell, is for high-performing individuals with senior leadership potential in a particular position and at least 1 year of service at Cornell. Candidates are nominated by deans or vice presidents in partnership with human resources, with consideration of qualifications including (a) senior leadership potential, (b) significant accomplishments, and (c) projected future contributions as identified by the nominator (Cornell University, 2012).

The program is limited to 25 participants and runs parallel to the academic year. With a focus on application of learning to the workplace, the program description offers, "Participants will work on projects that are important to the university while at the same time developing practical leadership and management experiences, with the goal of preparing participants to fill key positions as openings occur" (Cornell University, 2012, para. 2). To illustrate, the 2010-2011 participants were asked to interview university leaders, faculty, and staff; review the 5-year business plan; and make recommendations to further the plan through presentations to the president and provost. Doolittle (2011) quoted an assistant dean, who said, "The strategic plan now needs to be translated at the unit level . . . so that people can see themselves in the plan" (para. 7).

Success of the program is determined by a combination of participant feedback and project results. Participants offer feedback about the experience, including whether or not they would recommend participation by others. Senior leaders evaluate the success of the
program by the team project deliverables: whether they hold merit and provide real solutions to organizational issues (Uher & Johnson, 2011).

University of Virginia. Founded in 1819 by Thomas Jefferson, the University of Virginia is a public institution located in Charlottesville, Virginia. Student enrollment as of 2011 was 24,297, with a faculty and staff headcount of 7,979 (University of Virginia, 2012a). In the 2012 U.S. News & World Report list of best colleges, the University of Virginia ranked second in the category of Top Public Schools and 25th out of 50 in the category of Best National Universities.

In 2007, the university reconvened its Commission on the Future of the University to consider strategies to combat the rise of global competition in higher education and to discern ways to distinguish the University of Virginia among other local, national, and international institutions. The Commission on the Future of the University (2008) offered, "Our strategy is to strengthen our core resources while strategically funding selected new efforts that will further distinguish the University" (p. 4). Strong and consistent leadership would be necessary to carry forth new initiatives, often achieved through training and development efforts. Recognizing this, University of Virginia President Casteen (2008) said, "This new generation of capable, visionary leaders will guide the University into the next decade and beyond" (para. 7).

The Leadership Development Center is a division of the university Human Resources unit and focuses on development opportunities, tools, and strategies that support individual professional development. The Leadership Development Center provides a variety of programs based on the various stages of employee development. The programs include (a) Supervisory Essentials for newly appointed supervisors, (b)
Managing at the University of Virginia for all levels of supervisor positions, (c) Managing the University of Virginia Way for supervisors and managers with a minimum of 3 years of experience, (d) Leadership Strategies for more experienced managers, and (e) the Executive Onboarding Program to acclimate executives new to the University of Virginia.

Leadership Strategies is considered a managerial development program, offered once per year in cohort format (University of Virginia, 2012b). Candidates must be nominated by a manager and must meet specific criteria: (a) hold a director-level position or above, (b) have 5 years or more of management experience, and (c) have multiple supervisees as well as a broad span of influence. The objectives of the program include individual development through comprehensive, 360-degree feedback; addressing of leadership issues with resident subject-matter experts; and completion of a project related to the university's mission and strategies.

In addition, cohorts are given private, interactive audiences with senior administrators from across campus to learn about issues specific to the administrators' areas. For example, the 2012 cohort met with President Teresa Sullivan, who discussed the vision for the University of Virginia. Other meeting topics included budget and financial concerns, the student experience, the 2012 legislative session and its impacts on the University of Virginia, leadership transition and change management, as well as university goals and organizational alignment (University of Virginia, 2012b).

Success of the program is measured by an increase in readiness and engagement of participants (Uher & Johnson, 2011). In addition, the program is expected to provide positive internal and external promotion. The key measurement of effectiveness is
resolution of institutional issues via action-learning projects.

Duke University. Located in Durham, North Carolina, Duke University originated as Trinity College in 1838, later expanding into Duke University, named for its major benefactor, James B. Duke, in 1924. Today the university encompasses 10 colleges and schools and the Duke University Health System. Student enrollment in 2011 was 14,746 students, and the faculty and staff headcount (including the Health System) was 34,366 (Duke University, 2012).

In 2004, Duke University began an intensive planning period, culminating in a new strategic plan adopted by the Board of Trustees in 2006. Called Making a Difference, the Duke University (2006) plan aimed to enhance university academic excellence, while continuing to capitalize on the strengths of the university's reputation in collaboration and connection through diversity (Burness, 2006). Six goals are intended to ensure the strategic plan meets the vision.

1. The first goal is to "increase the capacity of our faculty to develop and communicate disciplinary and interdisciplinary knowledge" (Duke University, 2006, p. 25). The overall strategic plan designates funds to support an increased effort at recruiting and retaining superior faculty.

2. "Strengthen the engagement of the university in real world issues" (Duke University, 2006, p. 33) is a recommitment to the university's specific strengths and a pledge to build upon them in service to society.

3. The third goal is to "attract the best graduate and professional students and fully engage them in the creation and transmission of knowledge" (Duke University, 2006, p. 39). The university will focus more attention on inclusion and fostering a sense of
community among graduate students.

4. The fourth goal is to "foster in undergraduate students a passion for learning and a commitment to making a difference in the world" (Duke University, 2006, p. 41). Through innovative teaching methods, students will be empowered to participate more in their own education and to connect with the community at large in the process.

5. "Transform the arts at Duke" (Duke University, 2006, p. 47) means enhancing programs and opportunities across disciplines.

6. The final goal is to "lead and innovate in the creation, management, and delivery of scholarly resources in support of teaching and research" (Duke University, 2006, p. 51). The university will provide enhanced support to the libraries and information technology in order to augment academic endeavors (Duke University, 2006).

The strategic plan outlines clear definitions of success and assessment strategies. For example, for the first goal to increase faculty quality, Duke will strive to ensure 75% of faculty hires will be in fields of strategic importance. Along with this indicator, nine other assessment strategies will be employed to gauge the success of this one goal, with nine identified key expectations or outcomes.

The latest iteration of the university's mission statement was adopted by the Board of Trustees in 1994 and included the directive from founder James B. Duke that members of the university are "to provide real leadership in the educational world" (Duke University Board of Trustees, 2001, para. 2) and "maintain a place of real leadership in all that we do" (para. 4). The overarching theme at Duke University is to demonstrate leadership into the next generation through the action items of the strategic
plan. To support that effort, Duke University Human Resources offers an array of training and professional development opportunities designed for different audiences.

The Professional Development Institute offers a First Time Supervisors program. It is a highly selective program, with a maximum of only 20 participants. Consideration for the program is contingent on (a) manager nomination, (b) 3 years of full-time service at Duke University, (c) 2 consecutive years of meeting or exceeding performance expectations, and (d) acceptance of a retention agreement of 2 years following completion of the program. In addition, the candidate's manager must identify a mentor: another manager or supervisor with at least 5 years of service at Duke University who is willing to meet with mentees and attend sessions as required.

The next offering in the leadership-development sequence is the Duke Leadership Academy. Focused on leadership and the behaviors needed to implement business strategies, this highly selective program requires nomination from a vice president or dean at Duke. Over 12 months, participants experience a curriculum that mixes classroom learning, exposure to senior leaders at Duke, individual and 360-degree assessments, coaching, and practical application of theory to current Duke University challenges. The program is based on best practices from the curriculum codeveloped by Duke's Fuqua School of Business, Kenan Institute for Ethics, and University Athletics. This codeveloped program is referred to as the Fuqua/Coach K Center on Leadership and Ethics (COLE). The COLE curriculum is known for (a) developing leaders of consequence, (b) being a knowledge source, (c) being a community builder, and (d) providing a developmental portal to integrate students from knowledge to application (Fuqua/Coach K COLE, 2012).
Coach K, short for Coach Mike Krzyzewski, is recognized for his leadership and his ability to coach leadership from others. Coach K has been a successful basketball coach for over 30 years and has learned many lessons from interactions with many different players over that time. In an interview with Sitkin and Hackman (2011), Coach K said that he never wants to force someone else to try to lead in his same style; he wants to help others learn what their own style is. Similarly, Duke President Richard Brodhead wants to help emerging leaders at Duke find their style and discover how it can ultimately help Duke move forward as an organization. At the conclusion of the 3rd year of the Duke Leadership Academy, Green (2012) captured a quote from President Brodhead to participants:

In my job, leadership is about meeting everyone, listening to everyone, then putting their ideas back out to them in terms of the mission of Duke. . . . People want to hear you put their aspirations into words so they can act on them. (para. 3)

If the definition of servant leadership is a leader who is attentive to the growth and development of those with whom they work, then the style of current leadership at Duke University appears to be servant leadership. Through the offering of the Duke Leadership Academy, senior administrators are demonstrating the spirit to cultivate servant leadership throughout the organization, and into the future, planning for the long-term success of Duke University.

According to Uher and Johnson (2011), success of the Duke Leadership Academy is measured by increased engagement and commitment to Duke. It is also measured by strengthened leadership capabilities and the personal development plans created for each participant during the course of the program. The keys to success are the use of best practices from the COLE program, access to university and community leaders, and the opportunity to utilize new skills through action-learning projects.
Vanderbilt University. Founded in 1873, Vanderbilt University is an independent, private research university in Nashville, Tennessee. Known for research and development, Vanderbilt University secured just over $0.5 billion in Fiscal Year 2011, ranking 20th among U.S. colleges and universities in federal research funding and in the top 10 in National Institutes of Health research (Zeppos, 2012). Student enrollment in 2011-2012 was 12,859, a little more than half undergraduate (Vanderbilt University, 2012b). Vanderbilt was ranked 17th in the U.S. News & World Report (2012) Best Colleges category and ranked in 10 other categories. The total number of employees in Fiscal Year 2011, including the medical center, was 23,834, making Vanderbilt University the second largest employer in the state of Tennessee.

As of 2012, six major initiatives at Vanderbilt were intended to move the university toward its vision: (a) enhanced financial aid, (b) College Halls at Vanderbilt, (c) graduate education, (d) international education, (e) research, and (f) energy and environment. The aim of enhanced financial aid was to remove financial barriers for students and replace need-based student loans with grant and scholarship assistance. This required Vanderbilt to reallocate funding where possible and attract new scholarship endowment, in the quest to strengthen the policy of admitting students based on merit rather than financial status (Vanderbilt University Office of the Chancellor, 2012). The College Halls initiative was a transformation of residential life in which select faculty would live in apartments among the students to create a more bonded community and the best possible experience for 1st-year students in particular. Graduate education is strongly connected to research and thus a natural progression. Interrelated is international education: Vanderbilt endeavors to increase not only recruiting of international students but also
research relationships, collaborations, and funding opportunities from international sources. Even though Vanderbilt already has a strong reputation for research, there is still room to grow and garner more research dollars. The final initiative, energy and environment, began with university recycling and carpooling efforts, but more can be done. The university has begun green building efforts and wants to improve education and individual efforts at conservation.

In terms of the university's mission, goals, and values, Vanderbilt University statements are focused and succinct. Along with scholarly research, creative teaching, and service to society, the quest for new knowledge will continue by virtue of the values of "open inquiry, equality, compassion and excellence in all endeavors" (Vanderbilt University, 2015, para. 2).

In a state-of-the-university address, Chancellor Zeppos (2012) commended the university community for exercising good fiscal judgment, which has allowed Vanderbilt to continue to invest in its students and its own future. Although many peer universities were dealing with budgetary cuts and forced reductions in enrollment and staffing, Vanderbilt's student enrollment increased, research funding increased, and alumni and generous benefactors increased their financial support over prior years. This abundance is allowing Vanderbilt to continue to reinvest in its own infrastructure and in talent development.

The administration of Vanderbilt has been focusing on overarching themes as the backdrop to advancing strategic priorities. The themes include One University, collaboration, and leadership excellence. Most universities are splintered, with schools and colleges within the university competing with each other for scarce resources. Vanderbilt, through a sense of community, prefers instead to focus on leveraging
synergies between colleges to get the most out of resources. The One University initiative will take collaboration and leadership excellence to bring to fruition (Patterson, 2009). At a Faculty Senate meeting in 2012, Chancellor Zeppos said that the university would stand or fall based on the quality of its leaders (Vanderbilt University Faculty Senate, 2012). From this follows investment in university talent through training and development.

To ensure consistency in support of the university mission and goals, the Organizational Effectiveness unit of Human Resources provides programs and courses in leadership development. At the entry level, the Human Resources Leadership Foundation Series includes five modules intended to help participants develop supervisory skills: (a) Attracting, Hiring, and Retaining New Staff; (b) Targeted Selection (for hiring managers only); (c) Developing and Coaching Staff; (d) Managing Performance and Behavior; and (e) Legal Issues (Vanderbilt University, 2012a). Modules can be taken in any order, with no specified time limitation, at the convenience of the individual. Course instructors are members of the Organizational Effectiveness team, Human Resources, and General Counsel. Other individual leadership or supervisory sessions are available on topics such as becoming a leader, performance conversations, change and transition, project management, and conflict. Leadership training is also provided under the heading of Health and Wellness for Faculty and Staff, including role change for newly promoted leaders, team leadership, and dealing with conflict as a leader.

An offering at the highest level is the Leadership Academy. The Leadership Academy threads the concepts of One University, leadership excellence, and
collaboration through eight sessions spanning 6 months. There is a kick-off meeting, a final celebration meeting, and six sessions devoted to topics surrounding best practices in leadership. The sessions are offered in sequence, building on concepts in progression (Uher & Johnson, 2011). The program kick-off is an orientation session, which includes an overview of the program and the Vanderbilt Commitment.

The Month 1 training session is a day-long session entitled Leading Self. This session includes four topics of focus: (a) leadership excellence at Vanderbilt, (b) the leadership journey, (c) growing self-awareness for effective leadership, and (d) self-development and the art of learning agility. These four topics are centered on understanding one's self in terms of style and perceptions. The Month 2 session is a day-long continuation of Leading Self, including the topics of (a) the power of influence and inspiration, (b) personal vitality, and (c) elements of leader presence. Building on the previous session, Month 2 explores the translation of personal leadership style to sphere of influence (Uher & Johnson, 2011).

Month 3 begins a segment called Leading Others. The session includes a dinner event, and the topics are (a) orchestration, (b) bringing out the best in others, and (c) teachable point of view. These topics cover the art of organizing and inspiring individual contributions. Month 4 is a continuation of Leading Others, including the following topics: (a) real conversations, (b) tackling conflict, (c) building effective teams, (d) creating vital teams, and (e) moving from individual contributor to manager. This day-long session helps leaders to create teams and address inevitable conflict (Uher & Johnson, 2011).

Month 5 begins the segment called Leading the Institution. The day-long session covers topics of (a) institutional success, (b) leadership networking (over cocktails and
dinner), (c) the business of higher education, and (d) problem resolution. This session introduces issues and concerns of importance to the organization and what to do about them (Uher & Johnson, 2011). The Month 6 session is 2 days and includes (a) the student or patient experience, (b) the business of higher education, (c) high-level organizational structure, and (d) the leadership challenge. This session brings the focus to the highest level of oversight, understanding the business in general and its relationship to Vanderbilt University. The final session is a graduation celebration, also covering the topic of community connection. This session celebrates the successful completion of the program by participants and introduces community involvement for consideration (Uher & Johnson, 2011).

Throughout the program, a formal accountability and assessment process captures data to determine the effectiveness of the program. Participants are given prework, assignments, and action-learning projects that are monitored for rates of completion and degree of accuracy. After-learning technology, the dissemination of online postsession tests, is also enabled. In addition to data collection, group debriefings are held. Coaching and mentoring are provided to help solidify information and help participants to shape new skills. Finally, 360-degree assessments are administered, including peers, coaches, and senior administrators, to determine whether leaders have begun to demonstrate the level of leadership this program was designed to instill (Uher & Johnson, 2011).

Summary of the Literature

Among the many models of program evaluation, consensus in the literature has
been that the Kirkpatrick model appears to be the most common option for evaluating employee training, especially if flexibility and future replication of evaluation processes are considerations. The Kirkpatrick model is commonly used because of the straightforwardness and wide acceptance of the four levels. Training outcomes must include learning and behavioral aspects. Regarding reactions, D. Kirkpatrick and Kirkpatrick (2007) underscored the importance of measuring satisfaction of participants, because future training depends on positive reactions not just of participants but of their managers as well. Given that so many organizations do go through the effort of reaction surveys (Cohen, 2005), this is a widely embraced concept.

From the review of benchmark universities in leadership development (Uher & Johnson, 2011), three main components of successful programs were shared among the identified organizations. The first is clearly defined expectations of the outcomes of the program. One expectation is to confirm those individuals who are truly potential leaders, not just high performers, to help stock the pipeline of future leaders. The second component is to utilize the program to have these high-potential high performers resolve organizational issues through action-learning activities. The use of action-learning activities makes it much easier to identify the ROI of the program. The third component of benchmark leadership-development programs is the differentiation in the program between leading and managing. All six universities discussed have separate supervising, managing, emerging-leader, and leadership programs. For leadership, this helps keep the content focused on the high-level oversight unique to the executive level.
Research Questions

Within the framework of Kirkpatrick's four levels of evaluation, the research questions in support of this study reflect the four levels of reaction, learning, behavior, and results. At Level 1, two questions were asked. Research Question 1 was the following: What outcomes are the stakeholders anticipating as a result of supervisors participating in the university's leadership training program? These expectations were likely to be reflective of the university's vision and strategic priorities. Research Question 2 was the following: Did participants in the training program react favorably enough to recommend training to others? The expectation was that content would be considered useful by participants, so that they would recommend training to their subordinates.

The Level 2 question was Research Question 3: Did participants understand and retain the desired learning outcomes, as specified by the training module content and measured by pre- and posttraining testing? The expectation was that the information would be retained. The Level 3 question was Research Question 4: Are participants observed by their managers applying learning outcomes in the work environment? The Level 4 question was Research Question 5: Is the training having a measurable impact on the organization and meeting stakeholder expectations? If stakeholder expectations are aligned with the organization's strategic priorities, the training can be molded according to goals as they adjust over time.
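The pre- and posttraining comparison behind Research Question 3 is typically analyzed with a paired-samples t test on each participant's gain score. The following is a minimal sketch using hypothetical scores; the study's actual test data and analysis software are not specified in this section:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic on gain scores (post - pre)."""
    gains = [b - a for a, b in zip(pre, post)]
    n = len(gains)
    se = stdev(gains) / math.sqrt(n)  # standard error of the mean gain
    return mean(gains), mean(gains) / se, n - 1  # mean gain, t, df

# Hypothetical pre- and posttraining test scores for eight participants
pre = [60, 55, 70, 65, 58, 62, 68, 59]
post = [75, 68, 78, 72, 70, 74, 80, 69]

gain, t, df = paired_t(pre, post)
print(f"mean gain = {gain:.2f}, t({df}) = {t:.2f}")
```

With real data, the resulting t statistic would be compared against a critical value (or a p value obtained from statistical software) to judge whether the mean gain is statistically significant rather than due to chance.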
Chapter 3: Methodology

Program

Training was evaluated at a not-for-profit, private university headquartered in the southeastern United States. The university was chartered in 1964, founded by educators who had innovative ideas for providing higher education opportunities at a distance to students in the physical and social sciences, challenging the convention of brick-and-mortar establishments long before personal computing took hold. In the next 48 years, the university would grow to 18 colleges and schools, offering 144 programs in undergraduate, graduate, and first-professional degrees. At the time of this study, student enrollment was approximately 28,000, locally and at sites in 23 states, making the university the eighth largest not-for-profit, independent university in the United States. The headcount of supporting faculty, staff, and administration was just over 4,000 as of 2012.

With a new president at the helm, the university has an aggressive new organizational vision and 10-year business plan, which requires mobilization of the university's workforce. Bacharach (2007) stated, "Leadership is about coming up with a viable agenda, getting people behind your initiative, and sustaining momentum so people will stay on your side and bring ideas to fruition" (p. 14). Leadership plays a pivotal role in marshalling an organization's workforce; therefore, by extension, leadership development is a critical component as well.

The university leadership-development training consists of five competency-based modules intended to provide participants with enough knowledge and overview of skills to help them competently perform in their roles and encourage competency from the individuals reporting directly to them (direct reports). The five modules are (a)
Communication in the Workplace, (b) Managing Conflict and Change, (c) Performance Management, (d) Project Management and Measurement, and (e) Visioning and Planning.

The Communication in the Workplace module encompasses facets of effective interpersonal interactions to address clarity in verbal and written forms and skills in listening and collaboration. Topics include active listening, verbal and written communication, barriers to effective communication, and interpersonal skills to enhance communication in the workplace.

The module entitled Managing Conflict and Change addresses desired and undesired effects of change. Facilitators discuss the critical role of conflict in change efforts, as well as components of an effective change plan, and offer role-playing exercises to practice reframing techniques.

The Performance Management module centers on interviewing strategies, employee motivation and recognition, delegation, coaching, and feedback. Participants are provided with strategies to craft interview questions using past behavior to predict future performance. Facilitators also provide an overview of motivation strategies for enhanced employee performance as well as strategies to recognize and reward employees.

In the Project Management and Measurement module, participants are provided the steps in project planning and risk analysis. Facilitators present the steps in project scheduling, including detailed benchmarks and time lines. Participants are expected to identify methods to document activities related to projects and discuss a variety of metrics for measuring change stemming from project implementation.

The Visioning and Planning module provides an overview of leadership theories
and processes. Facilitators outline the steps involved in strategic planning, best practices in team building, and factors involved in effective decision making.

Participants

The target population for this study included all participants in the leadership-development training program at the study university. This population included supervisors, managers, directors, and every level of administrator who managed the activities of three or more direct reports at the university. University supervisors represented 18 colleges, schools, and centers as well as 14 nonacademic support centers. Of the 650 supervisory positions at the university in 2012, when the leadership-development training program was initiated, 407 (63%) were identified as having three or more direct reports and thus were required to participate. Supervisors who had fewer than three direct reports were required to attend training after the first group completed training.

Training participants were surveyed for knowledge pre- and postintervention. They were also surveyed postintervention regarding their level of satisfaction with, and reaction to, the training. They were surveyed postintervention as well to assess whether executive expectations had been met.

Participants in the evaluation of the leadership-development training program included training participants, supervisors of the training participants, and the direct reports of the training participants. A 360-degree approach to the evaluation, involving surveying managers and subordinates of training participants, attempted to overcome perception distortion by drawing on a variety of sources (D. Kirkpatrick & Kirkpatrick, 2007).
Evaluation Model

The evaluation framework used for this study was Kirkpatrick's four levels of evaluation (see Figure 1). The four levels are reaction, learning, behavior, and results.

[Figure 1. Kirkpatrick's four levels of evaluation grouped by individual or organizational influence. Level 1 (Reaction) assesses satisfaction with content, facilitator, time, and relevancy; Level 2 (Learning) measures knowledge via statistical comparison of pre- and postsurveys; Level 3 (Behavior) uses 360-degree surveys of supervisors and direct reports; Level 4 (Results) statistically compares measurements against expected outcomes.]

Level 1, reaction, refers to participants' postintervention level of satisfaction with the training, including satisfaction with the content, the relevance of the content for supervisors, perceived competency of the facilitator, length of time devoted to the training, and helpfulness of materials provided. Reaction adds to the "chain of evidence" (D. Kirkpatrick & Kirkpatrick, 2007, p. 123) that builds through the four levels and ultimately provides data for deciding on the effectiveness of the intervention. D. Kirkpatrick and Kirkpatrick (2007) indicated that the degree to which participants find the intervention relevant is critical to actual learning and later application of the learning on the job.

Level 2, learning, measures whether participants acquired new (or increased) knowledge. For this study, learning was determined by comparing pre- and postintervention test results. Test content was developed from the learning objectives of the leadership-development training modules. Statistical analysis of the net scores, the
difference between the pre- and posttests, tested for significance of change correlated to the intervention.

According to D. Kirkpatrick and Kirkpatrick (2007), surveys and questionnaires are two methods of evaluating the application of new behaviors on the job to obtain Level 3 data posttraining. D. Kirkpatrick and Kirkpatrick recommended 360-degree evaluations, soliciting feedback from superiors and subordinates of training participants to identify whether new behaviors introduced during training are demonstrated on the job. Feedback is provided in a confidential manner to avoid conflicts and promote trust in the process (Demirkaya, 2007). For this study, a 360-degree evaluation was employed, through which supervisors and direct reports were surveyed using online questionnaires that allowed anonymous responses. The survey participants were asked whether new or increased behaviors for skills related to specific training modules were being observed.

Level 4, results, assesses workplace metrics such as engagement, turnover, and similar measurements compared against a baseline assessment to determine whether training can be correlated with improvement in individual and organizational performance. For this study, results were to be evaluated through analysis of actual outcomes against the expected outcomes articulated by university executives. Statistically significant changes toward expected outcomes were to be the determining factor as to whether the training intervention could be considered effective. Both summative and formative, the outcomes would inform the training coordinator where and how the program might require modification.

Instruments

Leadership training participants were surveyed at different times in the training
cycle in accordance with the first three of Kirkpatrick's four levels of evaluation. Level 1, reaction, was captured by surveying participants' overall reaction to the training immediately upon completion of the individual sessions (Appendix D). Level 2, learning, was captured by pre- and postintervention knowledge testing at intervals of 1 week before the training and 1 week after the training. Survey items were structured as multiple choice or true–false, based on the content of each training module (Appendix E). Testing before training could help facilitators understand the level of knowledge participants had prior to entering training and therefore update the content accordingly to reflect information that would be considered new. In addition, testing 1 week after the intervention could help identify what new information was understood, as well as what information actually stayed with participants into the near future.

Level 3, behavior, was measured through surveys administered to participants' supervisors and subordinates several weeks after completion of the entire training curriculum (Appendix F). The survey included Likert-scale rated questions intended to determine whether participants were displaying new or increased behaviors on the job postintervention. Level 4, overall results, was intended to compare organizational baseline data obtained prior to the initiation of training, such as data from employee engagement surveys and employee records, with postintervention data.

Procedures

Design. Survey items for Levels 1 and 3 were based on 5-point Likert-scale ratings intended to measure reaction to content, relevance to participants' jobs, facilitator effectiveness, and materials provided. Survey items associated with Level 2 included multiple-choice and true–false questions. Responses were intended to be in the form of
quantifiable metrics for ease of comparison in statistical analyses.

Data collection procedures. The ObjectPlanet Opinio online survey software application was used for collecting Levels 1–3 survey data from training and nontraining participants (such as supervisors and direct reports of training participants). Level 4 data were to be collected from institutional data provided from university records. Employee engagement data would be provided by the Office of Institutional Effectiveness. Other employee metrics, such as data regarding turnover rates and the number and nature of employee relations issues, would be provided by the Office of Human Resources.

Data analysis. Nonparametric analysis was used for comparisons of Likert-scale questions. Chi-square tests of independence were used for the majority of the statistical comparisons. For each question on the pre- and posttraining learning surveys, the number of correct and incorrect responses for all participants was recorded in a 2 x 2 table as shown in Table 1.

Table 1

Model of Comparison Analysis for Level 2 Pre- and Posttraining Knowledge Testing

Question 1            Pretraining survey   Posttraining survey   Totals
Correct responses     A                    B                     A + B
Incorrect responses   C                    D                     C + D
Total                 A + C                B + D                 N

In Table 1, the potential number of correct responses from all participants in the pretraining survey is given by A. A + C sums to the total number of participants for the pretraining survey (assessing knowledge before the training). Similarly, B + D represents the total number of participants for the posttraining survey (assessing knowledge after the training). N is the total number of surveys administered.
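The 2 x 2 comparison described in Table 1 can be sketched in code. The dissertation does not name the statistical software actually used, so the following is an illustrative Python function (the function name is my own) that computes Pearson's chi-square statistic for a 2 x 2 table via the shortcut formula; for 1 degree of freedom, the upper-tail p value can be obtained from the standard library as erfc(sqrt(x/2)).

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square test of independence for a 2x2 table.

    Cell layout follows Table 1:
                            pretraining  posttraining
        correct responses        a            b
        incorrect responses      c            d

    Returns (chi2, p), where p is the upper-tail probability for
    1 degree of freedom: P(X > x) = erfc(sqrt(x / 2)).
    """
    n = a + b + c + d
    chi2 = (n * (a * d - b * c) ** 2
            / ((a + b) * (c + d) * (a + c) * (b + d)))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Example: 30/70 correct/incorrect pretraining, 35/65 posttraining
# (the distribution used in Table 2).
chi2, p = chi_square_2x2(30, 35, 70, 65)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # chi2 = 0.57, p = 0.4503
```

Note this formula applies no continuity correction, which is consistent with the chi-square values reported in the examples that follow.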
For example, if 100 subjects participated in the pretraining survey, and the same 100 subjects participated in the posttraining survey, the total N would be 200, which would represent the total number of surveys, not the total number of participants (still 100). The null hypothesis established for the study was as follows: There is not a statistically significant difference at the .05 level in the pretraining and posttraining knowledge of participants (i.e., learning is not correlated with training).

Examples of response distributions. Table 2 represents an example distribution of responses to Item 1 of the learning surveys (pre- or posttraining, as the questions are identical). In this example, the chi-square statistic (χ2 = 0.57, with 1 degree of freedom) would not be significant at the .05 level (p = .4503), which indicates that the null hypothesis could not be rejected. There would be no evidence of a statistically significant correlation between learning and training.

Table 2

Example of Level 2 Learning Survey Responses Indicating No Evidence of Learning

Item 1                Pretraining survey   Posttraining survey   Totals
Correct responses     30                   35                    65
Incorrect responses   70                   65                    135
Total                 100                  100                   200

Table 3 represents a similar example distribution of responses to Item 1 of the learning surveys. However, in this example, the chi-square statistic (χ2 = 4.8, with 1 degree of freedom) would be significant at the .05 level (p = .0285), indicating the null hypothesis could be rejected. This would be evidence of a statistically significant correlation between learning and training for Item 1.
Table 3

Example of Level 2 Learning Survey Responses Indicating Evidence of Learning

Item 1                Pretraining survey   Posttraining survey   Totals
Correct responses     30                   45                    75
Incorrect responses   70                   55                    125
Total                 100                  100                   200

Logistic regression performed on the example in Table 3 yielded the same chi-square statistic of 4.8 with 1 degree of freedom and p = .0285, indicating the model is significant at the .05 level. The null hypothesis would be rejected, and evidence would suggest a correlation between participation in training and receiving a correct response for Item 1. In addition, logistic regression would produce an odds ratio equal to 1.909, indicating the odds of giving a correct response to Item 1 increase by about 91% with training.
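The logistic-regression figures quoted above can be verified directly from the Table 3 counts: with a single binary predictor (trained vs. not trained), the fitted odds ratio reduces to the cross-product ratio of the 2 x 2 table. A short check in Python (standard library only; this is an illustration, not the software used in the study):

```python
import math

# Table 3 counts: rows are correct/incorrect, columns are pre/post.
a, b = 30, 45   # correct responses: pretraining, posttraining
c, d = 70, 55   # incorrect responses: pretraining, posttraining

# Pearson chi-square (shortcut formula for a 2x2 table), df = 1.
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
p = math.erfc(math.sqrt(chi2 / 2))  # upper-tail probability, df = 1

# Odds of a correct answer posttraining vs. pretraining; for a 2x2
# table this equals the odds ratio a logistic regression of
# correctness on training status would report.
odds_ratio = (b / d) / (a / c)

print(f"chi2 = {chi2:.1f}, p = {p:.4f}, odds ratio = {odds_ratio:.3f}")
# chi2 = 4.8, p = 0.0285, odds ratio = 1.909
```

The odds ratio of 1.909 corresponds to the roughly 91% increase in the odds of a correct response reported in the text.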
Chapter 4: Results

Boaden (2006) contended that evaluation processes are an instrumental part of any development program. After 2 years and 33 training sessions held as part of the leadership-development program at the study university, considerable data had been collected using the framework of Kirkpatrick's four levels of evaluation. Each level of evaluation corresponds to specific research questions and surveys outlined in this study.

Population and Data Collection

The participants of the leadership-development training were employees of the university who supervised or managed the activities of three or more direct reports. There were 407 supervisors identified to participate; at the time of this study, 211 had completed all five modules. Surveys were administered to participants of the individual sessions of each module, which occurred six or seven times during the 2-year period from November 2012 through June 2014 (see Table 4).

Table 4

Training Intervention Session Dates by Module

Year   Project Management   Visioning    Performance   Managing Conflict   Communication in
       & Measurement        & Planning   Management    & Change            the Workplace
2012   Nov. 16              Dec. 7       Nov. 30
2013   Jan. 18              Feb. 1       Mar. 15       Feb. 22             Jan. 11
       June 14              May 24       June 7        May 31              Mar. 8
       Oct. 25              Oct. 4       Oct. 11       Oct. 18             May 17
       Nov. 22              Nov. 15      Dec. 6        Dec. 13
2014   Feb. 28              Feb. 7       Mar. 14       Feb. 14             Jan. 17
       May 23               June 6       June 20       May 9               Mar. 21
       May 16
All training participants were invited to take an online pretest the week before each session and a posttest the week after, as well as an online reaction survey on the day the session was attended. Across all sessions, the attendance rate was approximately 75% of registered participants, and the average survey participation rate was 35%. These surveys constituted Levels 1 and 2 of Kirkpatrick's four levels of evaluation.

Behavioral observation by supervisors and direct reports of training participants, Level 3 participation, was requested only of those participants who completed all five modules. Of the 211 participants who completed the training program, 28 granted permission to survey their supervisors and direct reports. These 28 yielded a survey population of 183 direct reports and 23 supervisors. Of those, 85 direct reports (46% participation rate) and 13 supervisors (56% participation rate) responded to the behavioral observation survey (Appendix F).

Research Question 1 was addressed through an interview survey of stakeholder expectations (Appendix B). Research Question 2 was addressed through the Level 1 reaction survey (Appendix D). Research Question 3 was addressed through the Level 2 knowledge pre- and posttraining tests (Appendix E). Research Question 4 was addressed through the Level 3 survey of behavior (Appendix F). Each research question has corresponding surveys and response sets for each module of the training intervention, aggregated from each session of each module. Response sets were collected through online surveys for each session of each training module.

Data cleaning. Before performing statistical analyses, null observations were identified. Null observations occurred when a survey response had a "completed" response set status and date stamp, but no answers were chosen for any of the questions.
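The null-observation screen described above can be sketched as a simple filter. The record layout below is hypothetical (the actual Opinio export format is not described in the text); the logic is only that a response counts as null when its status is "completed" but every answer field is empty.

```python
# Hypothetical survey-export records: a completion status, a date
# stamp, and the respondent's answers keyed by question ID.
responses = [
    {"status": "completed", "date": "2013-03-08",
     "answers": {"q1": "b", "q2": "true", "q3": "d"}},
    {"status": "completed", "date": "2013-03-08",
     "answers": {"q1": "", "q2": "", "q3": ""}},   # null observation
]

def is_null_observation(record):
    """True for a 'completed' response in which no question was answered."""
    return (record["status"] == "completed"
            and not any(record["answers"].values()))

cleaned = [r for r in responses if not is_null_observation(r)]
print(len(responses), "->", len(cleaned))  # 2 -> 1
```

Removing these records before analysis prevents empty response sets from deflating the counts of correct and incorrect answers in the 2 x 2 comparisons.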