Training Evaluation
BHMS4424 Career Planning and Employee Development
Training evaluation is essential; it not only measures
the effectiveness of learning but also ensures that
training investments lead to real-world improvements
and organizational success.
Session Outline
01 Types of evaluation
02 Importance of evaluation
03 Models for evaluation
04 Threats to validity
05 Types of evaluation design
06 Factors influencing the type of evaluation design
Intended Learning Outcomes
By the end of this session, you will be able to:
• identify and differentiate between various types of training
evaluation designs, including formative and summative evaluations,
and explain when each type is appropriate.
• describe Kirkpatrick’s Model and the Phillips Model, including their
levels of evaluation, and analyze how each level contributes to
understanding the effectiveness and ROI of training programs.
• discuss the potential threats to the validity of training evaluations,
including internal and external validity, and evaluate methods to
control these threats to ensure an accurate assessment of training
outcomes.
The evaluation of a training program is the systematic
process of assessing the effectiveness and impact of the
training on participants and the organization.
It involves collecting and analyzing data to determine
whether the training objectives have been met and to
identify areas for improvement.
Types of Evaluation:
• Formative Evaluation
• Summative Evaluation
Types of Evaluation
Formative Evaluation
• Happens while creating and developing the
program.
• Ensures the training is organized and runs well.
• Checks if trainees are learning and feeling
satisfied.
• Focuses on ways to improve the program.
• Involves gathering feedback about the program.
• Examples:
• Quizzes
• In-class discussions
• Creating diagrams or charts
• Homework or classwork
• Exit surveys
Summative Evaluation
• Measures how much trainees have improved
after the training.
• Can be thought of as helping to validate and
‘check’ formative assessment.
• Involves collecting numerical data to evaluate
outcomes.
• Examples:
• End-of-year assessments
• Midterms or end-of-semester exams
• End-of-semester portfolios
1. Identifies the strengths and weaknesses of the
program.
2. Assesses whether the program’s content and
organization help with learning and applying
skills on the job.
3. Determines which trainees gained the most or
least from the training.
4. Collects data to help market training programs.
5. Evaluates the financial benefits and costs of the
program.
6. Compares the costs and benefits of training
versus no training.
7. Compares different training programs to find the
best option.
Importance of Evaluation
• One of the most commonly used evaluation
models in L&D.
• Developed by Donald Kirkpatrick in the 1950s.
• Four levels of evaluation:
1. L1 Reaction: This level measures how
learners feel about the training program.
2. L2 Learning: This level measures how much
learners have learned from the program.
3. L3 Behavior: This level measures whether
learners have changed their behavior due to
the program.
4. L4 Results: This level measures the impact
of the program on the organization’s results.
Kirkpatrick’s Model
• Similar to the Kirkpatrick Model, but adds a
Level 5 that measures the return on
investment (ROI) of the training program.
• The Level 5 ROI measurement uses data
from L2 Learning, L3 Behavior and L4 Results
to create a model for finding out what
monetary returns the organization is actually
getting back from the training dollars spent.
ROI(%) = (Net benefits of the training program /
Total program cost) x 100
Phillips’ Model by Jack Phillips
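The Level 5 ROI formula above can be sketched in a few lines of Python. The dollar figures below are hypothetical, used only to illustrate the arithmetic.

```python
def training_roi(net_benefits: float, total_cost: float) -> float:
    """Phillips Level 5 ROI: (net benefits / total program cost) x 100."""
    if total_cost <= 0:
        raise ValueError("total program cost must be positive")
    return (net_benefits / total_cost) * 100

# Hypothetical example: a program costing $50,000 that produced
# $80,000 in monetary benefits, i.e. net benefits of $30,000.
benefits = 80_000
cost = 50_000
roi = training_roi(benefits - cost, cost)
print(f"ROI = {roi:.0f}%")  # ROI = 60%
```

Note that the formula uses *net* benefits (benefits minus cost) in the numerator, so a program that exactly pays for itself yields 0% ROI, not 100%.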
• Threats to validity are factors that can make
evaluators doubt the evaluation results.
• Internal Validity: Questions whether the
observed results were actually caused by the
training rather than by other factors.
• External Validity: Questions whether the
results can apply to other groups of trainees
or different situations.
Evaluation Design: Threats to Validity
• Pretests and Post-tests: Compare
measurements taken before and after training to
see how much trainees have improved.
• Use of Comparison Groups: Include a group of
employees who do not attend the training but
participate in the evaluation to provide a
benchmark.
• Random Assignment: Assign employees to
either the training group or the comparison
group randomly to ensure fairness.
Methods to Control Threats to Validity
• Post-test Only: Outcomes are measured
only after training. This is suitable when
trainees are expected to start with similar
knowledge or skills.
• Pretest/Post-test: Measures are taken
before and after training. This is useful for
companies that want to evaluate training
without excluding any employees.
• Pretest/Post-test with Comparison Group:
Includes both trainees and a comparison
group. This design analyzes differences
between the groups to see whether the
training caused any improvement.
Types of Evaluation Design
• Time Series: Outcomes are measured at
regular intervals before and after training.
This helps analyze how training results hold
up over time.
• Solomon Four-Group: Combines both
pretest/post-test and post-test-only designs.
This method helps control many threats to
internal and external validity.
Types of Evaluation Design (cont’d)
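The pretest/post-test with comparison group design can be sketched numerically: compute each group's average gain from pretest to post-test, then take the difference between the groups as the estimated training effect. All scores below are hypothetical, for illustration only.

```python
def average_gain(pre: list[float], post: list[float]) -> float:
    """Mean improvement from pretest to post-test for one group."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Hypothetical assessment scores (0-100) for four trainees and
# four comparison-group employees who did not attend the training.
trained_pre,  trained_post = [55, 60, 48, 62], [72, 78, 65, 80]
control_pre,  control_post = [54, 61, 50, 63], [58, 63, 52, 66]

# Subtracting the comparison group's gain removes improvement that
# would have happened anyway (e.g. from on-the-job experience).
training_effect = average_gain(trained_pre, trained_post) - \
                  average_gain(control_pre, control_post)
print(f"Estimated training effect: {training_effect:.1f} points")
```

If the comparison group improves nearly as much as the trainees, the observed gains likely reflect an internal-validity threat (such as maturation or a practice effect on the test) rather than the training itself.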
Comparison of Evaluation Designs
Factors Influencing the Type of Evaluation Design
Further Readings
01 Noe, R. A. (2023). Employee training & development
(9th ed.). McGraw-Hill. – Ch. 6

