Designing and Conducting
Summative Evaluations
Terrell McCall
10.20.19
Background
 Attempts to improve student achievement by tinkering with this or that component
of a course can be frustrating, often leading an instructor or course manager to
explain low performance as a student problem. The instructor, learners, materials,
instructional activities, delivery system, and learning and performance
environments all interact to bring about the desired student learning outcomes. A
change in one component can directly affect the others, so the instructional
process is best viewed as a system whose purpose is to bring about learning.
Objectives
 Describe the purpose of summative evaluations
 Describe the two phases of summative evaluation and the decisions resulting from
each phase
 Design an expert judgement phase of summative evaluation
 Design an impact phase of summative evaluation
 Contrast formative and summative evaluation by purpose and design
Expert Judgement Phase of Summative
Evaluation
 The purpose of the congruence analysis is to examine the match between the
organization’s stated needs and the instructional materials.
The closer the tasks in the job analysis description and the organization’s current
needs match the goals and objectives of the instruction, the more likely learners
are to achieve the skills and transfer them to the worksite. A simple congruence
tally of this kind is sketched below.
Resources
 Materials that are too costly, however effective, can exhaust an organization’s
budget for maintenance.
 The facilities and equipment available in the organization and those required to
implement the instruction should also be contrasted. The information gathered
from your congruence analysis should be shared with the appropriate decision
makers.
Content Analysis
 One strategy is to provide the experts with copies of all materials and ask them to
judge the accuracy, currency, and completeness of the materials for the
organization’s stated goals.
 Another strategy is to obtain the design documents from the group that produced
the instruction and ask the expert to use them as a standard against which to
evaluate the accuracy and completeness of the instructional materials.
Design Analysis
 The evaluator should judge the adequacy of the components of the instructional
strategy included in the materials. As an external evaluator, you may not know
whether the materials are adequate for the given learners’ needs, but you should
take steps to find out about the learners’ characteristics.
Transfer Feasibility Analysis
 The evaluator must also know whether critical aspects of the job were adequately
simulated in the learning context; other considerations, such as supervisor
capabilities, equipment, or environments, are also important to examine.
 In organizational cultures that do not support use of the new skills, the potential
for transfer is slim. Evaluators can request posttest data from the organization
providing the instruction to determine whether the skills were actually learned.
Existing Materials Analysis
 The proliferation of e-learning over the past few years has many organizations
scrambling for quality instructional materials for this delivery format.
 The rapid expansion of the field without matching expansion in graduate programs
preparing instructional designers means that many employed designers today lack
advanced degrees.
 This potentially means additional work for external evaluators who can use
instructional design criteria to judge the quality of existing materials.
Impact Phase of Summative Evaluation
 Impact analysis is conducted within the organization.
 Sometimes called outcomes analysis, impact analysis typically includes the
following parts: focusing the study, establishing criteria and data needs, selecting
respondents, planning study procedures, summarizing and analyzing data,
reporting results, and negotiating resources.
Focusing Impact Study
 The first planning activity is to center your study in the workplace. The evaluator
must shift from the perspective of the instruction to a perspective on the
organization.
 Review the organization’s goals
 Your questions should yield the information needed for the impact analysis.
 Your initial contact can sink a study if company personnel are not approached
appropriately
Establishing Criteria and Data
 The criteria and data gathered in the performance site vary from one context to another
 Data-gathering methods depend on the resources available for the study
Selecting the Respondents
 The nature of the information you need and the particular questions you ask help
you plan the types and numbers of persons who will participate in your study (a
minimal sampling sketch follows this list)
 Learners/employees have insight into whether and how they use the skills, and if
not, why.
 Peers and subordinates of the learners selected may also offer insights into the
effectiveness of instruction
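As a minimal sketch of respondent selection, the hypothetical Python snippet below (the group names, pool sizes, and sample size of five are invented for illustration) draws a simple random sample from each respondent group so that learners, their peers, and their supervisors are all represented in the impact study.

    # Hypothetical respondent-selection sketch: draw a simple random sample
    # from each respondent group so every perspective is represented.
    import random

    respondent_pool = {
        "learners": [f"learner_{i}" for i in range(1, 41)],
        "peers": [f"peer_{i}" for i in range(1, 21)],
        "supervisors": [f"supervisor_{i}" for i in range(1, 9)],
    }

    random.seed(42)  # fixed seed so the selection is reproducible for the study record
    sample = {
        group: random.sample(members, k=min(5, len(members)))
        for group, members in respondent_pool.items()
    }

    for group, members in sample.items():
        print(f"{group}: {members}")

Stratifying the draw by group, rather than sampling the whole pool at once, guards against a sample made up entirely of learners.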
Planning Study Procedures
 Consider whether you need pre-existing posttest data from the training
organization (a minimal pretest–posttest analysis sketch follows this list)
 Decisions about data include issues of sampling, collection, and analysis.
 Data collection procedures depend on the type of data you have decided to
collect.
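As one illustration of the summarizing-and-analyzing step, the hypothetical Python sketch below (the scores are invented, and SciPy is assumed to be available) runs a paired pretest–posttest comparison to check whether the skills were actually learned before any jobsite impact is attributed to the instruction.

    # Hypothetical analysis sketch: paired comparison of pretest and posttest
    # scores for the same eight learners. Scores are invented for illustration.
    from scipy import stats

    pretest = [52, 61, 48, 70, 55, 63, 58, 66]
    posttest = [74, 79, 65, 88, 72, 81, 77, 85]

    t_stat, p_value = stats.ttest_rel(posttest, pretest)
    mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)

    print(f"Mean gain: {mean_gain:.1f} points")
    print(f"Paired t = {t_stat:.2f}, p = {p_value:.4f}")

A reliable gain on the posttest supports, but does not by itself prove, transfer to the worksite; the jobsite data gathered in the impact study still decide that question.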
Comparison of Formative and Summative
Evaluations
 Formative and summative evaluations differ in several respects
 Formative evaluations are undertaken to locate weaknesses and problems in the
instruction in order to revise it
 The formative evaluation includes three stages:
 One-to-one, small group, and field trial
Continued
 The impact evaluation stage is conducted with target learners after they have
returned to their jobs and focuses on the jobsite
 The materials subjected to the two evaluations typically have different
developmental histories: formative evaluation works with materials still in draft,
whereas summative evaluation examines completed instruction
Continued
 The final difference between the two evaluations is the outcome.
 The results of a formative evaluation include prescriptions for revising the
instruction and the actual revisions made to the materials across the three stages
of the evaluation; the result of a summative evaluation is a report documenting the
strengths and weaknesses of the instruction to support adoption decisions.
Summary
 Summative evaluations are conducted to make decisions about whether to
maintain, adopt, or adapt instruction. The primary evaluator in a summative
evaluation is rarely the designer or developer of the instruction. Instructional
designers make excellent summative evaluators because of their understanding of
well-designed instruction. The design of the expert judgement phase of summative evaluation
is anchored in the model for systematically designing instruction.
Contact
 Email: terrell2@bellsouth.net
 LinkedIn: Terrell McCall
