Designing and Conducting
Summative Evaluations
Alyssa Singleterry
Background
 Attempts to improve student achievement by tinkering
with this or that component of a course can be frustrating,
often leading an instructor or course manager to explain
low performance as a student problem. The instructor,
learners, materials, instructional activities, delivery
system, and learning and performance environments
interact to bring about desired student learning outcomes,
so a change in one component can directly affect the
others. The instructional process can be viewed as a
system whose purpose is to bring about learning.
Objectives
 Describe the purpose of summative evaluation
 Describe the two phases of summative evaluation and the
decisions resulting from each phase
 Design an expert judgement phase of summative
evaluation
 Design an impact phase of summative evaluation
 Contrast formative and summative evaluation by purpose
and design
Expert Judgement Phase of Summative Evaluation
 The purpose is to examine the congruence between the
organization's stated needs and the instructional materials
 The closer the training is aligned with the organization's
goals and needs, and the closer the tasks in the job
analysis description and the organization's current needs
match the goals and objectives of the instruction, the
more likely learners are to achieve the skills and transfer
them to the job
Resources
 Costly materials can outrun the organization's budget for
maintaining them
 Information gathered from the congruence analysis should
be shared with the appropriate decision makers
Content Analysis
 Provide experts with copies of the materials and ask them
to judge their accuracy, currency, and completeness
against the organization's stated goals
 Obtain the design documents from the group that
produced the instruction, and ask the expert to use them
as a standard against which to evaluate the accuracy and
completeness of the instructional materials
Design Analysis
 The evaluator should judge the adequacy of the
components of the instructional strategy included in the
materials
 As an external evaluator, you should take steps to find out
about the learners' characteristics
Transfer Feasibility Analysis
 The evaluator must know whether critical aspects of the
job were adequately simulated in the learning context
 Some work cultures lack the supervisor capabilities,
equipment, or environments needed to support the new
skills; the potential for transfer within such cultures is slim
 Evaluators can request posttest data from the organization
providing the instruction to determine whether the skills
were actually learned
Existing Materials Analysis
 The proliferation of e-learning over the past few years has
many organizations scrambling for quality instructional
materials in this delivery format
 The field has expanded rapidly without a matching
expansion in the graduate programs that prepare
instructional designers, so many designers working today
lack advanced degrees
 The result is additional work for external evaluators who
can assess existing instructional materials
Impact Phase of Summative Evaluation
 Impact analysis is conducted within the organization
 It includes:
 Focusing the study, establishing criteria and data needs,
selecting respondents, planning study procedures,
summarizing and analyzing data, reporting results, and
negotiating resources
Focusing the Impact Study
 Center your study in the workplace
 The evaluator must shift from the perspective of the
instruction to the perspective of the organization
 Review the organization's goals
 Each question should yield information needed for the
impact analysis
 The initial contact can sink a study if company personnel
are not approached appropriately
Establishing Criteria and Data
 Criteria and data in the performance site vary from one
context to another
 Data-gathering methods depend on the resources
available for the study
Selecting the Respondents
 The nature of the information you need and your particular
questions help you plan the types and number of persons
who will participate in your study (a small sampling sketch
follows this list)
 Learners/employees have insight into whether and how
they use the skills, and if not, why not
 Peers and subordinates of the selected learners may also
offer insights into the effectiveness of the instruction
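
A minimal illustration, not from the chapter: if a respondent roster were
available, one way to draw a small sample from each role group is sketched
below in Python; all names, group labels, and sample sizes are hypothetical.

    # Illustrative sketch only: roster contents and sample sizes are hypothetical.
    import random

    roster = {
        "learners":    ["L1", "L2", "L3", "L4", "L5", "L6"],
        "peers":       ["P1", "P2", "P3", "P4"],
        "supervisors": ["S1", "S2", "S3"],
    }

    random.seed(42)  # fixed seed so the selection is reproducible for the study record
    # Draw up to two respondents from each role group (stratified by role).
    sample = {role: random.sample(people, min(2, len(people)))
              for role, people in roster.items()}
    print(sample)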
Planning Study Procedures
 Consider whether you need preceding posttest data from
the training organization
 Decisions about how to collect data include issues of
sampling, data collection, and data analysis (see the
sketch after this list)
 Data collection procedures depend on the type of data you
have decided to collect
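
Another purely illustrative sketch, not from the chapter: suppose the study
obtained posttest scores from the training organization and self-reported
skill-use ratings from learners on the job. A few lines of Python could
summarize both and flag the transfer question the impact phase is meant to
answer; all data values here are hypothetical.

    # Illustrative sketch only: all data values are hypothetical.
    from statistics import mean, stdev

    posttest = [88, 92, 85, 81, 95, 70, 84]   # posttest scores (0-100) from training
    on_job_use = [4, 5, 2, 3, 5, 1, 4]        # self-reported on-the-job use (1-5)

    print(f"Posttest: mean={mean(posttest):.1f}, sd={stdev(posttest):.1f}")
    print(f"On-the-job use: mean={mean(on_job_use):.1f}, sd={stdev(on_job_use):.1f}")

    # Flag learners who passed the posttest but report little use on the job --
    # the learned-but-not-transferred pattern an impact study looks for.
    low_transfer = [i for i, (p, u) in enumerate(zip(posttest, on_job_use))
                    if p >= 80 and u <= 2]
    print(f"Passed but rarely using the skills: {len(low_transfer)} of {len(posttest)}")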
Comparison of Formative and Summative Evaluations
 The two evaluations differ in several aspects
 Formative evaluations are undertaken to locate
weaknesses and problems in the instruction in order to
revise it
 Formative evaluation includes three stages:
 One-to-one, small-group, and field trial
Comparison of Formative and Summative Evaluations
 Impact evaluation is conducted with target learners after
they have returned to their jobs and focuses on the job site
 Materials subjected to formative evaluation typically have
different developmental histories from those later
subjected to summative evaluation
Comparison of Formative and Summative Evaluations
 A final difference between the two evaluations is the
outcome
 The results of a formative evaluation include prescriptions
for revising the instruction and the actual materials
revisions made across the three stages of evaluation
Summary
 Summative evaluations are conducted to make decisions
about whether to maintain, adopt, or adapt instruction.
The primary evaluator in a summative evaluation is rarely
the designer or developer of the instruction. Instructional
designers make excellent summative evaluators because
of their understanding of well-designed instruction. The
design of the expert judgement phase of summative
evaluation is anchored in the model for systematically
designing instruction.
Contact Information
 Alyssa Singleterry
 asingleterry4436@myasu.alasu.edu
