Summative evaluation is conducted in two phases: the expert judgment phase and the field trial phase. The expert judgment phase involves analyses of congruence between materials and organizational needs, content accuracy, instructional design, utility, and feedback from current users. This determines if materials have potential to meet organizational goals. The field trial phase tests effectiveness with target learners and examines impact on learning, job performance, and the organization. It also assesses implementation feasibility and costs. Summative evaluation is used to make "go/no-go" decisions on retaining or replacing instructional materials based on their ability to address an organizational problem.
1. DESIGNING AND CONDUCTING SUMMATIVE EVALUATIONS
BY RAHEEN BROOKS
2. OBJECTIVES
• Define the purpose of summative evaluation
• Describe the two phases of summative evaluation and the decisions resulting from each phase
• Design a summative evaluation to examine the organizational benefits of implemented instruction
• Contrast formative and summative evaluation by purpose and design
5. WHAT IS SUMMATIVE EVALUATION?
Summative evaluation is the design of evaluation studies and the collection of data to verify the effectiveness of instructional materials with target learners.
7. PURPOSE OF SUMMATIVE EVALUATION
• To make “go/no-go” decisions:
o Retain the current materials?
o Look for something better suited to meet the organization’s specific instructional needs?
• Summative evaluations are used to judge the impact of a plan of instruction on the organization’s initial problem.
8. EVALUATORS: WHY SHOULD THEY BE EXTERNAL?
The primary evaluator in a summative evaluation is rarely the designer or developer of the instruction. An external evaluator is typically unfamiliar with the materials, the organization requesting the evaluation, and the setting in which the materials are evaluated. This is preferred because the evaluator has no personal investment in the outcome and is therefore likely to be more objective.
10. EXPERT JUDGMENT PHASE
Purpose: Do the materials have the potential for meeting this organization’s needs?
Several activities determine whether the candidate instruction is promising:
1. Congruence Analysis
2. Content Analysis
3. Design Analysis
4. Utility and Feasibility Analysis
5. Current User Analysis
11. EXPERT JUDGMENT PHASE: CONGRUENCE ANALYSIS
This activity consists of analyzing the congruence among the following:
1. An organization’s stated needs and goals and those addressed in the candidate instruction
2. An organization’s target learners’ entry skills and characteristics and those for which the candidate materials are intended
3. An organization’s resources and those required for obtaining and implementing the candidate instruction
12. EXPERT JUDGMENT PHASE: CONTENT ANALYSIS
During this activity, an identified content expert judges the materials for accuracy and completeness to determine whether they are in line with the organization’s stated goals. An instructional analysis of the stated goal is a very cost-effective method for this review.
13. EXPERT JUDGMENT PHASE: DESIGN ANALYSIS
Design analysis evaluates the adequacy of the components of the instructional strategy included in the candidate materials. Checklists are very useful during this activity.
14. EXPERT JUDGMENT PHASE: UTILITY AND FEASIBILITY ANALYSIS
Factors such as the availability of a learner guide or syllabus and an instructor’s manual are taken into consideration during this activity. This is also the time to gather information for the people who determined that the evaluation was necessary.
15. EXPERT JUDGMENT PHASE: CURRENT USER ANALYSIS
This final analysis seeks information about the candidate materials from organizations that are experienced in using them. The names of current users can often be obtained from the publishers of the materials.
16. FIELD TRIAL PHASE
Purpose:
Are the materials effective with target learners in the prescribed setting?
Outcome Analysis:
1. Impact on Learners
2. Impact on Job
3. Impact on Organization
17. FIELD TRIAL PHASE
Management Analysis:
Seeks to answer three questions.
1. Are instructor and manager attitudes satisfactory?
2. Are recommended implementation procedures feasible?
3. Are costs related to time, personnel, equipment, and
resources reasonable?
18. COMPARING FORMATIVE AND SUMMATIVE EVALUATION
Purpose
o Formative: Locate weaknesses in instruction in order to revise it
o Summative: Document strengths and weaknesses in instruction in order to decide whether to maintain or adopt it
Phases or Stages
o Formative: One-to-one, small group, field trial
o Summative: Expert judgment, field trial
Instructional Development History
o Formative: Systematically designed in-house and tailored to the needs of the organization
o Summative: Produced in-house or elsewhere, not necessarily following a systems approach
Materials
o Formative: One set of materials
o Summative: One set of materials or several competing sets
Position of Evaluator
o Formative: Member of the design and development team
o Summative: Typically an external evaluator
Outcomes
o Formative: A prescription for revising instruction
o Summative: A report documenting the design, procedures, results, recommendations, and rationale
19. CHANGE AGENT
As change agents, it is very important for us to constantly evaluate the instructional strategy to ensure that it stays the course with what the organization has in mind while remaining usable and feasible for the organization. The summative evaluation does just that.