2. Reflecting on
Curriculum Implementation
• How was your first semester in 2011/2012?
• How was Course Delivery?
• How was Course Evaluation?
• How did your students perform? Why?
• To what or to whom do your colleagues attribute this?
3. Inter-relatedness of
Curriculum Elements
[Diagram: the inter-related elements of curriculum development — Objectives, Content, Methodology, and Evaluation — arranged in a cycle around Curriculum Development]
5. Understanding the Numbers Game
Coursework
• Project - 25%
• Group Work – Research and Presentation 30%
• Essay – 20%
• Class Test - 15%
How do we find the coursework grade?
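One way to answer that question: treat each component mark as a score out of 100 and weight it by the percentages on this slide. A minimal sketch, with hypothetical student scores (the weights are from the slide; note they sum to 90, so the sketch normalises by the total weight):

```python
# Coursework weights from the slide; the scores are hypothetical examples.
weights = {"Project": 25, "Group Work": 30, "Essay": 20, "Class Test": 15}
scores = {"Project": 80, "Group Work": 70, "Essay": 65, "Class Test": 90}  # out of 100

# Weighted average, normalised by the total weight so the grade is out of 100.
total_weight = sum(weights.values())
coursework = sum(weights[c] * scores[c] for c in weights) / total_weight
print(round(coursework, 1))  # 75.0
```

Normalising by the total weight matters whenever the component weights do not sum exactly to 100.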
7. Getting Ahead of Our Game
• We were responsible for writing the courses
• We designed the evaluation – type & weighting
• Coursework
• Final Examination
• And how the two were to be combined
• We identified
– essential vs non-essential content
9. Without a Table of Specifications, assessment will produce scores that are of limited use and interpretability.
A TOS will help all our stakeholders:
Students - study guide
Lecturers – teaching guide; reporting;
accountability
Ministry – accountability; accreditation
10. What can we do, at this point?
• We MUST develop our Table of Specifications
and stand committed to using it as our
blueprint.
• It is against this matrix that our courses will be
evaluated.
11. Establish Undergirding Principles
• Our Philosophy on Teaching and Learning by
students – [bell curve phenomenon?]
• Criterion Referenced Testing
• Mastery Learning
• Item Difficulty
• Validity
12. Item Difficulty
The difficulty of the test depends on its purpose.
• To monitor the performance of all students, the distribution of item difficulty should match the distribution of achievement in the target population.
– 2/3 of the test: items that 30–70% of students answer correctly
– 1/3 of the test: items that more than 70% answer correctly, together with items that only about 30% are likely to answer correctly
• NOTE: Avoid excluding important areas of the curriculum simply because students perform very poorly or very well on them.
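The 2/3–1/3 guideline above can be checked against item statistics. A small sketch, using hypothetical p-values (the proportion of students answering each item correctly):

```python
# Hypothetical item p-values: the proportion of students who answered
# each item correctly on a pilot administration.
p_values = [0.35, 0.50, 0.62, 0.45, 0.68, 0.40, 0.82, 0.75, 0.25]

mid = [p for p in p_values if 0.30 <= p <= 0.70]   # moderate difficulty
easy = [p for p in p_values if p > 0.70]            # answered correctly by >70%
hard = [p for p in p_values if p < 0.30]            # answered correctly by <30%

# Compare the share of moderate items against the 2/3 guideline.
print(len(mid) / len(p_values))  # target: about 2/3
```

Here 6 of the 9 items fall in the 30–70% band, matching the guideline; the remaining third is split between easy and hard items.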
13. NRT vs CRT
• "Norm-Referenced Assessment: Assessment
designed to provide a measure of performance
that is interpretable in terms of an individual's
relative standing in some known group.
Criterion-Referenced Assessment: Assessment
designed to provide a measure of performance
that is interpretable in terms of a clearly defined
and delimited domain of learning tasks." (p. 42)
Linn and Gronlund (2000)
14. Understanding How CRT Relates to Mastery Learning
• "... include items that are directly relevant to the
learning outcomes to be measured, without
regard to whether the items can be used to
discriminate among students. No attempt is
made to eliminate easy items or alter their
difficulty.
• The goal of the criterion-referenced test is to
obtain a description of the specific knowledge
and skills each student can demonstrate." (Linn & Gronlund, 2000, p. 43)
15. Power Tests vs Speed Tests
POWER TEST
• On a Power Test all students are given enough time
to attempt and answer all items.
• Items are arranged in a hierarchy from knowledge
level (easy) to increasing difficulty.
• A power test should be administered so that a very
large percentage (90% is an acceptable minimum) of
the pupils for whom it is designed will have ample
time to attempt all of the items.
16. Power Tests vs Speed Tests
SPEED TEST
• A speed test is one in which a student must, in a limited
amount of time, answer a series of questions or perform a
series of tasks of a uniformly low level of difficulty.
• The intent of a speed test is to measure the rapidity with
which a pupil can do what is asked of him or her.
• Speed of performance frequently becomes important after
students have mastered task basics as in using a keyboard,
manipulatives, or phonics.
• Tests are often a mixture of speed and power even when
achievement level is the test's purpose. Such tests are called
partially speeded tests.
17. Considering TIME vs Weighting
Teachers must check time limits carefully to be sure that all students will have
the opportunity to address each test item adequately before the allotted
time is up.
• The amount of time for the test is determined before test construction
and is facilitated by using a Table of Specifications.
• Testing time is determined by:
– the number of objectives to be tested;
– the coverage and complexity of the objectives;
– the levels of acceptable performance;
– the demographics of students (age and ability levels);
– the class time available;
– the types of test items;
– the length and complexity of test items.
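The factors listed above can feed a rough time estimate. A minimal sketch: the item types and the minutes-per-item figures below are illustrative assumptions only, not values from the slide, and would need to be calibrated to your own students and item formats.

```python
# Assumed (illustrative) minutes needed per item, by item type.
minutes_per_item = {"multiple choice": 1.0, "short answer": 3.0, "essay": 15.0}

# Hypothetical item plan drawn from a Table of Specifications.
item_counts = {"multiple choice": 30, "short answer": 5, "essay": 2}

# Estimated testing time: sum of (count x minutes per item) over item types.
total = sum(item_counts[t] * minutes_per_item[t] for t in item_counts)
print(total)  # 30 + 15 + 30 = 75.0 minutes
```

An estimate like this, made before test construction as the slide recommends, shows quickly whether the planned items fit the available class time.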
18. Exploring Content
Content can be classified in many ways
• Cognitive
– Declarative; Procedural; Strategic (problem solving)
– Using Bloom’s Taxonomy
• Psychomotor
• Affective
Classifying content is important because different types of knowledge, skills, and attitudes are best assessed using specific strategies.
19. Declarative Knowledge
• Factual information stored in memory and
known to be static.
• Knowledge about something; describes how
things are. Things/events/processes, their
attributes, and the relations between these
things/events/processes and their attributes.
20. Procedural Knowledge
• Knowledge of how to perform, or how to
operate. [Know-how]. It involves making
discriminations, understanding concepts, and
applying rules that govern relationships and
often includes motor skills and cognitive
strategies.
21. Strategic Knowledge
• Information that is the basis of problem
solving, - action plans to meet specific goals;
knowledge of the context in which procedures
should be implemented; actions to be taken if
a proposed solution fails; and how to respond
if necessary information is absent.
22. Content-Process Validity
• The TOS ensures that items are representative of the material taught – adequate sampling.
• The intellectual reasoning level [process] used
during instruction and intended by the
curriculum designers is mirrored in the
assessment.
23. Facts about Knowledge
• All knowledge starts out as declarative information; procedural knowledge is acquired through inference from already existing knowledge.
• It is said that one becomes more skilled in problem solving when one relies more on procedural knowledge than on declarative knowledge.
24. Table of Specifications
A Table of Specifications classifies each test item
according to what topic or concept it tests
AND what objective it addresses.
26. Purpose of TOS
• To ensure that there exists correspondence
between the learning objectives for the
students and the content of the course.
• To ensure proper organization of assessment
procedures that best represent the material
covered in the teaching/learning process.
27. Benefits of a TOS
• Ensures that an assessment has content validity, i.e., that the test measures what it was supposed to test; a match between what was taught and what is tested.
• Ensures that the same emphasis on content during
instruction is mirrored on assessment (e.g., more items
about topic X and fewer about topic Y because you consider
X to be more important and you spent more time on X)
• Ensures alignment of test items with objectives (e.g., important topics might include items that test interpretation, application, and prediction, while unimportant topics might be tested only with simpler recognition items)
• Ensures that content is not overlooked or
underemphasized
28. Framework of TOS
• A Table of Specifications consists of a two-way chart or grid
(Kubiszyn & Borich, 2003; Linn & Gronlund, 2000; Mehrens &
Lehman, 1973; Ooster, 2003) relating instructional objectives to the
instructional content.
• The columns of the chart list the objectives or "levels of skills" (Gredler, 1999, p. 268) to be addressed;
• The rows list the key concepts or content the test is to measure.
• "We have found it useful to represent the relation of content and
behaviors in the form of a two dimensional table with the
objectives on one axis, the content on the other. The cells in the
table then represent the specific content in relation to a particular
objective or behavior" (Bloom et al., 1971).
29. Going Forward
• Examine the units according to their proposed duration and identify the weightings
• Classify the objectives according to their emphasis on knowledge/skills/attitudes for each unit
30. [Blank TOS grid]
Columns: COGNITION | PSYCHOMOTOR | AFFECTIVE | TOTAL
Rows:
UNIT 1: Understanding Self
UNIT 2: Diversity in the Classroom
UNIT 3: Professional Ethics and Teacher Relationships
SUBTOTAL
TOTAL
31. [Blank TOS grid with the cognitive domain subdivided by Bloom’s levels]
Columns: COGNITION (KNOWLEDGE | COMPREHENSION | APPLICATION | ANALYSIS | SYNTHESIS | EVALUATION) | PSYCHOMOTOR | AFFECTIVE | TOTAL
Rows:
UNIT 1: Understanding Self
UNIT 2: Diversity in the Classroom
UNIT 3: Professional Ethics and Teacher Relationships
SUBTOTAL
TOTAL
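A TOS grid like this is just a two-way table of item counts, so its subtotals and totals can be computed mechanically. A minimal sketch: the item counts below are hypothetical placeholders for the blank cells.

```python
# A Table of Specifications as a two-way grid: content rows x
# objective-domain columns. Item counts are hypothetical placeholders.
units = ["Understanding Self", "Diversity in the Classroom",
         "Professional Ethics and Teacher Relationships"]
domains = ["Cognition", "Psychomotor", "Affective"]

tos = {
    "Understanding Self":                            {"Cognition": 6, "Psychomotor": 1, "Affective": 3},
    "Diversity in the Classroom":                    {"Cognition": 8, "Psychomotor": 0, "Affective": 2},
    "Professional Ethics and Teacher Relationships": {"Cognition": 5, "Psychomotor": 0, "Affective": 5},
}

# Row totals (items per unit), column subtotals (items per domain),
# and the grand total (items on the whole test).
row_totals = {u: sum(tos[u].values()) for u in units}
col_totals = {d: sum(tos[u][d] for u in units) for d in domains}
grand_total = sum(row_totals.values())
print(row_totals, col_totals, grand_total)
```

Comparing the row totals against each unit's teaching time (and the column totals against the intended domain emphasis) is exactly the content-validity check the earlier slides describe.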
32. Factors that can Influence the Design
of the TOS
• A person’s understanding of the content being measured
• A person’s understanding of the purpose of assessment
• Time and resources will not permit the testing of every objective/content area on a syllabus;
therefore the CORE SKILLS/ESSENTIAL TOPICS ought to be agreed upon by the experts.
33. Let’s Not Reinvent the Wheel
Assessment instruments are available online
• Checklists
• Rubrics
• Rating scales
34. Next Steps
Complete a TOS for all courses being examined this semester.
Ensure that our colleagues, when submitting draft questions for final exams, indicate the objective(s) being assessed.
If the lecturers teaching a course on your board agree on the TOS, there will be no need for vetting.
35. Let’s remain
CONFIDENT and COMMITTED
Nobody trips over mountains. It is the small
pebble that causes you to stumble. Pass all
the pebbles in your path and you will find you
have crossed the mountain.
~Author Unknown
Consider the postage stamp: its
usefulness consists in the ability to stick to
one thing till it gets there. ~Josh Billings
36. Let’s commit ourselves to our Mission
Let’s vow to stick to each small task.
Ensure the TEAM remains motivated:
Together Everyone Achieves More.
I Thank You
37. References
• Anderson, P., & Morgan, G. (2008). Developing tests and questionnaires for a national assessment of educational achievement. http://www.uis.unesco.org/Education/Documents/National_assessment_Vol2.pdf
• Gredler, M. E. (1999). Classroom assessment and learning. New York: Longman.
• Linn, R. L., & Gronlund, N. E. (2000). Measurement and assessment in teaching (8th ed.). Upper Saddle River, NJ: Prentice Hall.