2. Background
This chapter addresses the material of chapter twelve: collecting
data to identify problems and revising instructional materials.
During the revision process, designers must keep a systems
perspective on their work and remain open to the possibility that
revisions may be warranted at any stage of the design process.
Although a number of studies indicate the benefit of revising
instructional materials, few propose any theories around which to
gather the data. In our approach to formative evaluation, we
interpret the data in light of our instructional strategy and then make
changes that seem to be indicated by the data and our understanding
of the learning process.
Two basic types of revisions are changes made to the content
or substance of the materials to make them more accurate or more
effective as a learning tool, and changes related to the procedures
for using your materials. In this chapter we point out how data
from various formative evaluation sources can be summarized and
used to identify portions of your materials that should be revised.
3. Objectives
• Describe various methods for summarizing
data obtained from formative evaluation
studies 

• Summarize data obtained from formative
evaluation studies 

• Given summarized formative evaluation
data, identify weaknesses in instructional
materials and instructor-led instruction

• Given formative evaluation data that
indicates problems in a set of instructional
materials, identify problems in the materials
and suggest revisions. 

4. Data analysis for one-to-one
trials
Following the one-to-one formative evaluation,
the designer has very little data because
information is typically available for only three to
five learners. Because these learners were selected
based on their diversity, the information they
provide will, in all likelihood, be very distinct,
rather than blending into some type of group
average. The designer has five kinds of basic
information available: learner characteristics and
entry skills, direct responses to the instruction,
learning time, posttest performance, and responses
to an attitude questionnaire. The first step is to
describe the learners who participated in the one-
to-one evaluation and to indicate their
performance on any entry skill measures.
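The five kinds of information listed above can be organized per learner before any summarizing begins. A minimal sketch in Python; the field names are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class OneToOneRecord:
    """One learner's data from a one-to-one formative evaluation trial."""
    learner_id: str
    entry_skill_scores: dict      # learner characteristics and entry skills
    item_responses: dict          # direct responses to the instruction
    learning_time_minutes: float  # learning time
    posttest_score: float         # posttest performance
    attitude_responses: dict      # attitude questionnaire responses
```

With only three to five such records, the "summary" is mostly a side-by-side description of each learner rather than a group average.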
5. Data analysis for small group and
field trials
The small-group formative evaluation provides the designer with
a somewhat different data summary. The fundamental
unit of analysis for all the assessments is the individual assessment
item. The individual item information is required for three reasons:
1. The information can be useful in deciding if there are
particular problems with the item or if it is measuring the
performance described in its corresponding objective
effectively. 

2. Individual item information can be used to identify the nature
of the difficulties learners are having with the instruction.

3. Individual item data can be combined to indicate learner
performance on an objective, typically expressed as the
percentage of items answered correctly within the set of
items for that objective.
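Reason 3 above — expressing performance on an objective as a percentage of its items answered correctly — can be sketched as follows. The 80% mastery criterion is an illustrative assumption, not a prescribed value:

```python
def objective_mastery(responses, objective_items, criterion=0.8):
    """Return (percent correct, mastered?) for one learner on one objective.

    `responses` maps item id -> True/False for one learner;
    `objective_items` lists the item ids belonging to the objective.
    """
    correct = sum(1 for item in objective_items if responses.get(item))
    pct = correct / len(objective_items)
    return pct, pct >= criterion
```

A learner who answers three of an objective's four items correctly scores 75% and, under this criterion, has not mastered the objective.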

6. Group's item-by-objective
performance
Although any number of computer-based programs
for data analysis can be used to create student
performance summaries, we recommend
spreadsheet programs such as Excel because they
are readily available and easy to use. The purpose
of the item-by-objective analysis is threefold:
• to determine the difficulty of each item for the group.

• to determine the difficulty of each objective for the
group.

• to determine the consistency with which the set of
items within an objective measures learners’
performance on the objective.
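The three purposes above can be sketched as a single group-level summary. The consistency index here — the proportion of learners whose responses within an objective are all correct or all incorrect — is one illustrative choice among several used in practice:

```python
def item_by_objective_summary(scores, objectives):
    """scores: {learner: {item: 1 or 0}}; objectives: {objective: [item ids]}.

    Returns per-item difficulty (proportion of learners answering
    correctly), and per objective a (difficulty, consistency) pair.
    """
    learners = list(scores)
    n = len(learners)
    item_diff = {}
    obj_summary = {}
    for obj, items in objectives.items():
        for item in items:
            item_diff[item] = sum(scores[l][item] for l in learners) / n
        obj_diff = sum(item_diff[i] for i in items) / len(items)
        consistent = sum(
            1 for l in learners
            if len({scores[l][i] for i in items}) == 1  # all 1s or all 0s
        ) / n
        obj_summary[obj] = (obj_diff, consistent)
    return item_diff, obj_summary
```

The same table is easy to build in a spreadsheet: items as columns, learners as rows, with difficulty and consistency computed along the margins.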

8. Learners' item-by-objective
performance
• The second type of analysis summarizes individual learner
performance.

• The first two columns of the summary table contain the number and
percentage of items answered correctly by each learner.

• The last two columns contain the number and percentage of
objectives mastered by each learner.

• If these data represent performance on entry behaviors or on
skills to be included in the instruction, they show whether
learners were ready for the instruction or had already mastered
some of its skills.
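The four summary columns described above can be computed per learner from the same item data. The 80% mastery cutoff is again an illustrative assumption:

```python
def learner_summary(scores, objectives, criterion=0.8):
    """For each learner, return (items correct, % items correct,
    objectives mastered, % objectives mastered).

    scores: {learner: {item: 1 or 0}}; objectives: {objective: [item ids]}.
    Mastery = answering at least `criterion` of an objective's items.
    """
    rows = {}
    n_items = sum(len(items) for items in objectives.values())
    for learner, resp in scores.items():
        correct = sum(resp[i] for items in objectives.values() for i in items)
        mastered = sum(
            1 for items in objectives.values()
            if sum(resp[i] for i in items) / len(items) >= criterion
        )
        rows[learner] = (correct, 100 * correct / n_items,
                         mastered, 100 * mastered / len(objectives))
    return rows
```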

9. Graphing learners' performances
• Another way to display data is through
various graphing techniques. 

• You may also want to graph the amount of
time required to complete the instructional
materials. 

• Another graphic technique for summarizing
formative evaluation data involves the
instructional analysis chart.

• The designer uses a copy of the instructional
analysis chart without the statement of
skills.
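Where a spreadsheet or plotting library is unavailable, the quantities described above (pretest vs. posttest mastery, learning time) can be roughed out as a plain-text bar chart. A minimal sketch; the label width and `#` scale are arbitrary choices:

```python
def text_bar_chart(values, width=40):
    """Render {label: value} as proportional text bars, longest = `width`."""
    top = max(values.values())
    lines = []
    for label, value in values.items():
        bar = "#" * round(width * value / top)
        lines.append(f"{label:>12} | {bar} {value}")
    return "\n".join(lines)
```

For example, `text_bar_chart({"pretest": 50, "posttest": 100})` shows the posttest bar at twice the pretest bar's length.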

10. Sequence for examining data
As you prepare summaries of your data, you
quickly begin to get an overall picture of the
general effectiveness of your instructional
materials and the extent of revisions needed.
• After removing data for any defective items
you should examine the remaining data with
regard to the entry skills of learners.

• Review the pretest and posttest data displayed
on the instructional analysis chart.

• Examine the pretest scores to determine the
extent to which individual learners already
possess the skills to be taught.

11. Learning time
• An important concern in any formative evaluation is the
amount of time required by students to complete the
instructional materials.

• Reducing that time is an extremely difficult task, and it
must be done with great care.

• Knowing what to remove from the materials, or what to
change without interfering with learning, is very
difficult.

• Often the decision can be made only after a trial/revise/
trial/revise process with target learners. 

12. Revision process
• First summarize your data as suggested in this chapter. 

• For each component, note any problems identified, the
changes proposed based on those problems, and the
evidence gathered illustrating each problem.

• Given all the data from a small group evaluation or field
trial, the designer must make decisions about how to
make the revisions. 

• When summarizing data from the field evaluation, be
careful to summarize it in an accurate and clear fashion. 

13. Revision of selected materials and
instructor-led instruction
• Omit portions of the instruction

• Include other available materials

• Simply develop supplementary instruction 

Procedures for the use of materials should also
be reconsidered in light of formative evaluation
data.
14. Analysis of item-by-objective data
across tests
• Each of the enhancing and stifling actions was exhibited
by the leader three times during the simulated meeting.

• Learners were given credit if their tally was within one
point of the exhibited actions.

• The row totals were obtained by summing the shaded
action pairs within each learner's row.

• With the pretest data summarized in this manner, the
analysis and interpretation began.
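The scoring rule above — credit for a tally within one point of the exhibited count — is simple enough to state in code. The defaults reflect the text (three exhibitions per action, a one-point tolerance):

```python
def tally_credit(observed, exhibited=3, tolerance=1):
    """True if a learner's tally of a leader action is close enough
    to the number of times the action was actually exhibited."""
    return abs(observed - exhibited) <= tolerance
```

So tallies of 2, 3, or 4 earn credit against three exhibited actions, while 5 does not.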

15. Analysis of data across tests
• The left side of the graph contains percentage levels
used to identify the percentage of the twenty students
who mastered each of the twelve behaviors.

• This high level of performance across the twelve
skills, and the learners' growth between the pretest and
the practice activity, indicate that the instruction was
effective in helping learners acquire the skills.

16. Analysis of attitudinal data
Items indicating potential problems can
be flagged. In this instance, a
potentially problematic question was
defined as one having a mean or
average score of 3 or lower and an
asterisk was placed to the left of items
with means in this area. Related to
learners' perceptions of their attention
levels during instruction, they were
attentive during all activities, and they
believed all objectives covered were
relevant to their goals as a group
leaders.
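The flagging rule described above — an asterisk beside any item whose mean rating is 3 or lower — can be sketched directly:

```python
def flag_problem_items(item_means, cutoff=3.0):
    """Mark attitude-questionnaire items with mean <= cutoff by
    prefixing an asterisk, as described in the text."""
    return {item: ("* " if mean <= cutoff else "  ") + item
            for item, mean in item_means.items()}
```

An item averaging exactly 3.0 is flagged, matching the "3 or lower" definition.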
17. Plans for revising instruction
• It is premature to make final decisions about
changes in the materials for one segment of
a total unit of instruction

• The changes should be based on the overall
effectiveness of the unit

• As the designers moved through the
formative evaluation process, they noted that
changes made in the materials could have
consequences other than the ones anticipated

19. Summary
The data you collect during the
formative evaluation should be
synthesized and analyzed to locate
potential problems in the
instructional materials. Your data
summaries should include learners'
remarks in the materials, their
performance on the pretest and
posttest, their responses on the
attitude questionnaire, their
comments during debriefing
sessions, and information gained
from the performance context.