MOVING ALONG NICELY…
Once you have decided on the goals and
objectives of the instruction and have
organized the instructional environment and
activities, you will need to decide how to
measure the learner’s development and
evaluate the success of your overall instructional
design.
In other words: CAN YOU and DID YOU REALLY
DO what you claimed you could?
REVIEW SOME BASIC ENGLISH FIRST
Assessment refers to procedures or
techniques used to obtain data about a
learner or product.
Measurement refers to the data collected,
which is typically expressed quantitatively.
The physical devices that are used to collect
the data are referred to as instruments.
Evaluation refers to a process that includes
assessment and measurement.
Worthen, Sanders, & Fitzpatrick (2004) wrote that
the purpose of evaluation is the “identification,
clarification, and application of defensible criteria to
determine an evaluation object’s value (worth or
merit), quality, utility, effectiveness or significance in
relation to those criteria” (p. 5).
Evaluation is used to determine the success level of
the instruction.
TYPES OF EVALUATION
Morrison, Ross & Kemp (2007) wrote that “Formative evaluation is quality
control of the development process” (p. 249).
Formative evaluation is used throughout the instructional design
process to ensure that the intervention being developed is tested
and revised. During a lesson, you may conduct formative
assessment to ensure that learners are on task/on track and
that the “intervention” or “intended training” is EN ROUTE.
As the word would suggest, summative evaluation is conducted at the
end of the instructional design process to determine how
successful the entire project was in helping to meet the major
goal. Summative evaluation can occur at the end of a lesson, at the
end of a term, or at the end of a semester. The GRAND FINALE.
THE GOAL OF LEARNER EVALUATION
Determining if a learner has reached a high level of
success is accomplished through learner evaluation .
Learner evaluation helps determine the level of
performance or achievement that an individual has
attained as a result of instruction.
An effective and appropriate learner evaluation is
BASED DIRECTLY ON the instructional goals and
objectives.
A learner evaluation has validity if it helps determine
whether the outcomes of instruction (based on the
objectives) were actually met.
In other words, did the learners meet the objectives?
Face Validity (concerned with how the learner
evaluation appears: is it reasonable, well-designed,
capable of gathering appropriate data)
Content Validity (concerned with the extent to which
the specific intended domain of content is
addressed in the evaluation).
Reliability is the extent to which a learner
evaluation will provide similar results when
conducted on multiple occasions.
In other words, if a test is given to the same
learner at different times without the learner
receiving any additional prep, and the same
results occur then the test is reliable.
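For the programmatically inclined, the test-retest idea above can be sketched in a few lines of Python. This is a minimal illustration only: the learner scores are hypothetical, and the Pearson correlation is used here as one common way to quantify how similar two administrations of the same test are.

```python
# Test-retest reliability sketch: correlate scores from two administrations
# of the same test to the same learners (all scores are hypothetical).

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Scores for five learners on the first and second administration.
first  = [72, 85, 90, 65, 78]
second = [70, 88, 91, 63, 80]

r = pearson_r(first, second)
print(f"test-retest reliability estimate: r = {r:.2f}")
# An r close to 1.0 suggests the test yields similar results on repeat,
# i.e., it is reliable in the sense described above.
```

A correlation near 1.0 for the hypothetical scores above would indicate a reliable test; a low correlation would suggest the results depend on when the test was taken rather than on what the learner knows.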
STARTING WITH INSTRUCTIONAL OBJECTIVES
It is extremely important to understand that
instructional objectives are a key element in the
development of an effective learner evaluation.
The Learner Evaluation is derived from the instructional objectives.
Well-written instructional objectives describe the
outcome a learner should be able to achieve after
instruction has been completed.
Three typical outcomes are possible.
A change in KNOWLEDGE, SKILL, OR ATTITUDE.
PULL YOUR 3 OBJECTIVE DOMAINS BACK OUT
(Cognitive, Psychomotor & Affective).
Assessment varies across the domains.
(TESTING A CHANGE IN KNOWLEDGE)
Objective tests (questions, referred to as ‘items’,
such as true/false, multiple choice, and matching, have
ONE CORRECT ANSWER).
Constructed-response tests (include short answer
and essay items, which focus on the learner constructing an
answer rather than selecting one that is provided).
Objective tests deal with low-level cognitive abilities.
Constructed-response tests deal with higher-level cognitive abilities.
(TESTING A CHANGE IN SKILL)
Evaluating whether a learner has had a change in
SKILL is done through examining actions or
behaviours that can be directly observed.
Process (evaluate the proper series of steps, use of
appropriate tools or instrument in an acceptable manner,
completion of the skill in a certain timeframe.)
Product (the quality and quantity of the product)
Techniques include DIRECT TESTING, PERFORMANCE RATINGS, and e-Portfolios, among others.
GUIDELINES FOR DIRECT TESTING
• Start by reviewing the task analysis. This will determine
the steps of the skill the learner needs to perform.
• Those steps form the criteria.
• Determine the level of proficiency that is considered
acceptable.
• Determine where the test will take place, what
equipment, materials and personnel are needed.
• Write the instructions that inform the learner of how the
test will be conducted.
• Establish how the results will be recorded.
• Conduct the test, judge the proficiency, and provide
feedback.
Two common techniques used are checklists and rating scales.
A checklist is used to determine the sequence of
actions. NO qualitative judgment is made on how
well the actions were performed, simply if the action
was performed or not.
A rating scale (Likert Scale) provides a rating of how
well a learner performs different actions. Typically, it
is a rating on a numerical scale that indicates
performance from low to high, poor to excellent etc.
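The distinction between the two techniques can be sketched in Python. This is a hypothetical illustration: the skill steps and scores are invented, and the point is only that a checklist records whether each step was performed while a rating scale records how well.

```python
# Checklist vs. rating scale for one observed skill (steps are hypothetical).

steps = ["gathers materials", "follows sequence", "uses tools safely",
         "completes within time"]

# Checklist: yes/no per step -- no judgment of HOW WELL it was done.
checklist = {"gathers materials": True, "follows sequence": True,
             "uses tools safely": False, "completes within time": True}

# Rating scale (Likert-style, 1 = poor .. 5 = excellent): HOW WELL each
# step was performed.
ratings = {"gathers materials": 5, "follows sequence": 4,
           "uses tools safely": 2, "completes within time": 3}

performed = sum(checklist[s] for s in steps)          # count of steps done
mean_rating = sum(ratings[s] for s in steps) / len(steps)

print(f"checklist: {performed}/{len(steps)} steps performed")
print(f"rating scale: mean rating {mean_rating:.1f}/5")
```

Note how the checklist collapses each step to performed/not performed, while the rating scale preserves a quality judgment for each step.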
GUIDELINES FOR THE RATING PROCESS
• Review the task analysis. This helps to determine
the steps of the skill the learner will need to
perform. The steps will serve as the individual
criteria on which the learner will be evaluated.
• Establish the 5 levels on the rating scale.
• When defining the numerical rating points, the verbal
descriptions should not overlap.
A NOTE ABOUT RUBRICS
Rubrics can be used in conjunction with various
learner evaluation techniques.
A rubric is an assessment instrument that is used to
evaluate a learner based on his or her performance of a task.
Rubrics can be used with KNOWLEDGE, SKILL and
ATTITUDE-related instructional objectives.
Rubrics are a more holistic approach.
Rubrics allow evaluation to be more objective and
consistent by clarifying the criteria to be evaluated in
specific and descriptive terms.
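One way to picture how a rubric pairs each criterion with specific, descriptive level descriptors is a small Python sketch. The criteria, descriptors, and scores below are all hypothetical.

```python
# Minimal rubric sketch: each criterion maps level numbers to descriptive
# wording, making the evaluation specific and consistent (names hypothetical).

rubric = {
    "organization": {1: "no clear structure", 2: "partially organized",
                     3: "clear and logical throughout"},
    "accuracy":     {1: "many factual errors", 2: "some minor errors",
                     3: "fully accurate"},
}

# One evaluator's judgment for one learner.
scores = {"organization": 3, "accuracy": 2}

total = sum(scores.values())
maximum = sum(max(levels) for levels in rubric.values())

print(f"rubric score: {total}/{maximum}")
for criterion, level in scores.items():
    # The descriptor explains WHY the learner got this level.
    print(f"  {criterion}: level {level} -- {rubric[criterion][level]}")
```

Because every level carries an explicit descriptor, two evaluators applying the same rubric are more likely to arrive at the same score, which is the consistency the slide describes.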
(TESTING A CHANGE IN ATTITUDE)
It is well-accepted that when conducting a learner
evaluation to determine a change in attitude, there are
problems that can occur. The most common problems
include social-desirability responses and self-deception.
Self-deception is a common adjustment phenomenon in human
behaviour where the tendency to want to like what we see when
we look at ourselves impacts how we respond (Hopkins, 1998).
Techniques for evaluating attitude include Observations &
Anecdotal Records, Surveys & Questionnaires,
Self-Reporting Inventories, and Interviews.