ASSESSMENT IN
LEARNING 1
OSCAR O. ANCHETA JR.
Instructor
Class Orientation
Choose a word from the Word Cloud
and explain its relevance/relationship to
assessment!
LESSON 1:
Basic Concepts and
Principles in
Assessing Learning
LEARNING OUTCOMES:
In this lesson, you are expected to:
describe assessment in learning and related concepts, and
demonstrate understanding of the different principles in assessing learning.
What is assessment?
 Rooted in the Latin word assidere, which
means “to sit beside another.”
 Generally defined as the process of gathering
quantitative and/or qualitative data for the
purpose of making decisions.
What is Assessment in
Learning?
 A systematic and purpose-oriented collection,
analysis, and interpretation of evidence of
student learning in order to make informed
decisions relevant to the learners.
 It aims to use evidence on student learning to
further promote and manage learning.
 Assessment in learning can be characterized
as (a) a process, (b) based on specific
objectives, and (c) from multiple sources.
What is measurement?
 The process of quantifying the attributes of an object.
What is evaluation?
 Refers to the process of making value
judgements on the information collected from
measurement based on specified criteria.
What is Testing?
 Testing is the most common form of
assessment.
 Refers to the use of a test or battery of tests
to collect information on student learning over
a specific period of time.
 Can be categorized as either a selected
response (objective format) or constructed
response (subjective format).
What is the significance of
TOS to the test?
 The Table of Specification (TOS) maps out
the essential aspects of a test (test objective,
contents, topics, item distribution).
 Is used in the design and development of
tests.
When is a test considered to
be good and effective?
 If it has acceptable psychometric properties.
 This means that a test should be valid, reliable, have an acceptable level of difficulty, and be able to discriminate between learners with higher and lower ability.
What is grading?
 The process of assigning value to the
performance or achievement of a learner
based on specified criteria or standards.
 Grades can be based on recitation, seatwork, homework, projects, and tests.
 Grading is a form of evaluation which provides information on whether a learner passed or failed a subject or a particular assessment task.
What are the different measurement frameworks used in assessment?
1. Classical Test Theory (CTT)
2. Item Response Theory (IRT)
What is Classical Test Theory?
 Also known as true score theory, it explains that variations in the performance of examinees on a given measure are due to variations in their abilities.
 Assumes that all measures are imperfect (affected by internal and external conditions).
 Provides an estimation of the item difficulty
based on the frequency or number of examinees
who correctly answer a particular item.
 Provides an estimation of item discrimination based on how examinees with higher versus lower ability perform on a particular item.
What is Item Response Theory?
 Analyzes test items by estimating the
probability that an examinee answers an
item correctly or incorrectly.
 Assumes that the characteristics of an item
can be estimated independently of the
characteristics or ability of the examinee
and vice versa.
 Aside from item difficulty and discrimination, IRT analysis can provide fit statistics and item characteristic curves.
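As an illustration of this probability estimate (the lesson does not name a specific IRT model; the two-parameter logistic used here is a common choice, and the parameter values are hypothetical):

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability of a correct
    response given examinee ability theta, item discrimination a, and
    item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability matches the item's difficulty has a 50% chance:
print(p_correct(theta=0.0, a=1.0, b=0.0))            # 0.5
# Higher ability raises the probability of a correct response:
print(round(p_correct(theta=1.0, a=1.0, b=0.0), 3))  # 0.731
```

Plotting `p_correct` across a range of abilities traces the item characteristic curve mentioned above.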
What are the different types of
assessment in learning?
1. Formative Assessment
2. Summative Assessment
3. Diagnostic Assessment
4. Placement Assessment
5. Traditional Assessment
6. Authentic Assessment
1. Formative Assessments
 Provides information to both teachers and
learners on how they can improve the
teaching-learning process.
 Used at the beginning of and during instruction for teachers to assess learners' understanding.
 Can be used to inform learners about their
strengths and weaknesses to enable them to
take steps to learn better and improve their
performances as the class progresses.
2. Summative Assessments
 Aims to determine learners’ mastery of content or
attainment of learning outcomes.
 Typically used for evaluating learners' performance in class and for providing teachers with information about the effectiveness of their teaching strategies and how they can improve their instruction in the future.
 Can inform learners about what they have done well
and what they need to improve on in their future
classes or subjects.
3. Diagnostic Assessment
 Aims to detect the learning problems or
difficulties of the learners so that corrective
measures or interventions are done to ensure
learning.
 Done right after seeing signs of learning
problems in the course of teaching.
 Can also be done at the beginning of the
school year for a spirally-designed
curriculum.
4. Placement Assessment
 Done at the beginning of the school year to
determine what the learners already know or
what are their needs that could inform the
design of instruction.
 Grouping of learners based on the results of
the placement is done before instruction.
 Example: Entrance Examination
5. Traditional Assessment
 Refers to the use of conventional
strategies/tools.
 Typically used as the basis for evaluating and
grading learners.
 Viewed as an inauthentic type of
assessment.
6. Authentic Assessment
 Refers to the use of assessment strategies or
tools that allow learners to perform or create
a product that is meaningful to the learners.
 The most authentic assessments are those that allow performances that most closely resemble real-world tasks or applications in real-world settings or environments.
What are the different principles in
assessing learning?
1. Assessment should have a clear
purpose.
2. Assessment is not an end in itself.
3. Assessment is an ongoing,
continuous, and a formative process.
4. Assessment is learner-centered.
What are the different principles in
assessing learning?
5. Assessment is both process and product-
oriented.
6. Assessment must be comprehensive and
holistic.
7. Assessment requires the use of appropriate
measures.
8. Assessment should be as authentic as
possible.
ACTIVITY 1: CONCEPT
MAPPING
DIRECTIONS:
1. Create a graphic organizer to summarize
and encapsulate the fundamental concepts and
principles involved in assessing learning.
2. Following the illustration, explain the relationships among the concepts.
THANK YOU!
LESSON 2:
Assessment Purposes,
Learning Targets, and
Appropriate Methods
LEARNING OUTCOMES:
In this lesson, you are expected to:
explain the purpose of classroom assessment, and
formulate learning targets that match appropriate assessment methods.
What is the purpose of
classroom assessment?
 Purpose of assessment may be classified in
terms of the following:
1. Assessment of Learning
2. Assessment for Learning
3. Assessment as Learning
1. What is Assessment of
Learning?
 Refers to the use of assessment to determine
learners’ acquired knowledge and skills from
instruction and whether they were able to
achieve the curriculum outcomes.
 It’s generally summative in nature.
2. What is Assessment for
Learning?
 Refers to the use of assessment to identify
the needs of learners in order to modify
instruction or learning activities in the
classroom.
 It is formative in nature and it is meant to
identify gaps in learning experiences of
learners so they can be assisted in achieving
the curriculum outcomes.
3. What is Assessment as
Learning?
 Refers to the use of assessment to help
learners become self-regulated.
 It is formative in nature and meant to use
assessment tasks, results, and feedback to
help learners practice self-regulation and
make adjustments to achieve the curriculum
outcomes.
What are the roles of classroom
assessment in the teaching-learning
process?
1. Formative: Focuses on acquiring information on the current status and level of learners' knowledge and skills or competencies.
2. Diagnostic: Focuses on identifying specific learners' weaknesses or difficulties that may affect their achievement.
3. Evaluative: Focuses on measuring learners' performance or achievement for the purpose of making judgments, and grading in particular.
4. Facilitative: Focuses on improving the teaching-learning process.
5. Motivational: Focuses on providing mechanisms for learners to be motivated and engaged in learning and achievement in the classroom.
What are Learning Targets?
 Statements of what learners are supposed to learn and what they can do because of instruction.
 Learning targets specify both the content and
criteria of learning.
How are Learning Targets different from and related to Goals, Standards, and Objectives?
 Goals are general statements about desired
learner outcomes in a given year or during
the duration of a program.
 Standards are specific statements about what
learners should know and are capable of
doing at a particular grade level, subject, or
course. Types: (1) Content, (2) Performance,
(3) Development, and (4) Grade Level.
 Objectives are specific statements of
learners’ performance at the end of an
instructional unit.
What are the 3 domains of
Bloom’s Taxonomy?
1. Cognitive: knowledge-based goals
2. Affective: feelings/emotions-based goals
3. Psychomotor: skills-based goals
Bloom’s Taxonomy of Educational
Objectives in the Cognitive Domain
Revised Bloom’s Taxonomy of
Educational Objectives in the Cognitive
Domain (Anderson & Krathwohl, 2001)
Dimensions of Cognitive Process

Original (1. Knowledge): Recall or recognition of learned materials such as concepts, events, facts, ideas, and procedures. Illustrative verbs: define, recall, name, enumerate, label.
Revised (1. Remember): Recognizing and recalling facts. Illustrative verbs: identify, list, name, underline, recall, retrieve, locate.
Dimensions of Cognitive Process

Original (2. Comprehension): Understanding the meaning of learned material, including interpretation, explanation, and literal translation. Illustrative verbs: explain, describe, summarize, discuss, translate.
Revised (2. Understand): Understanding what the information means. Illustrative verbs: describe, determine, interpret, explain, translate, paraphrase.
Dimensions of Cognitive Process

Original (3. Application): Use of abstract ideas, principles, or methods in specific concrete situations. Illustrative verbs: apply, demonstrate, produce, illustrate, use.
Revised (3. Apply): Applying the facts, rules, concepts, and ideas in another context. Illustrative verbs: apply, employ, practice, relate, use, implement, carry out, solve.
Dimensions of Cognitive Process

Original (4. Analysis): Separation of a concept or idea into its constituent parts or elements and an understanding of the nature of and association among the elements. Illustrative verbs: compare, contrast, categorize, classify, calculate.
Revised (4. Analyze): Breaking down information into parts. Illustrative verbs: analyze, calculate, examine, test, compare, differentiate, organize, classify.
Dimensions of Cognitive Process

Original (5. Synthesis): Construction of elements or parts from different sources to form a more complex or novel structure. Illustrative verbs: compose, construct, create, design, integrate.
Revised (6. Create): Combining parts to make a whole. Illustrative verbs: compose, produce, develop, formulate, devise, prepare, design, construct, propose, reorganize.
Dimensions of Cognitive Process

Original (6. Evaluation): Making judgments of ideas or methods based on sound and established criteria. Illustrative verbs: appraise, evaluate, judge, conclude, criticize.
Revised (5. Evaluate): Judging the value of information or data. Illustrative verbs: assess, measure, estimate, evaluate, critique, judge.
Knowledge Dimensions

1. Factual: The basics of a discipline; the facts or bits of information. This type of knowledge usually answers questions that begin with who, where, what, and when. Sample question: What is the capital city of the Philippines?
2. Conceptual: The concepts, generalizations, principles, theories, and models that one needs to know in a discipline. Usually answers questions that begin with what. Sample question: What makes the Philippines the “Pearl of the Orient Seas”?
3. Procedural: The processes, steps, techniques, methodologies, or specific skills needed in performing a specific task. Usually answers questions that begin with how. Sample question: How do we develop items for an achievement test?
4. Metacognitive: Knowledge that makes one understand the value of learning in one's life. It requires reflective knowledge and strategies on how to solve problems or perform a task through understanding oneself or the context. Sample question: Why is teaching the most suitable course for you?
Types of Learning Targets

1. Knowledge targets: The factual, conceptual, and procedural information that learners must learn in a subject or content area.
2. Reasoning targets: Knowledge-based thought processes that learners must learn; they involve the application of knowledge in problem solving, decision making, and other tasks that require mental skills.
3. Skills targets: The use of knowledge and/or reasoning to perform or demonstrate physical skills.
4. Product targets: The use of knowledge, reasoning, and skills in creating a concrete or tangible product.
Sample Learning Targets

Objective: At the end of the lesson, the students should be able to demonstrate their ability to write the literature review section of a thesis proposal.

Learning Targets:
K: “I can explain the principles in writing the literature review of a thesis proposal.”
R: “I can argue the significance of my thesis through the literature review.”
S: “I can search for and organize related literature from various sources.”
P: “I can write an effective literature review section of a thesis proposal.”
What is the relationship of Learning
Targets to assessment?
1. Clarity of Expectations: Learning targets provide clear statements
of what students are expected to know, understand, and be able to
do by the end of a lesson, unit, or course. Assessment measures
whether students have achieved these targets.
2. Alignment: Learning targets should be aligned with curriculum
standards, instructional objectives, and assessment criteria.
Assessments should directly reflect the learning targets to ensure
that they effectively measure student attainment of the intended
knowledge and skills.
3. Assessment Design: Learning targets guide the design of
assessments. Educators develop assessment tasks, questions,
and rubrics based on the specific learning targets to be assessed.
This alignment ensures that assessments are meaningful and
relevant to the learning objectives.
What is the relationship of Learning
Targets to assessment?
4. Feedback and Progress Monitoring: Assessments provide
valuable feedback to both students and teachers regarding student
understanding and progress toward achieving the learning targets.
Through assessment results, teachers can identify areas of strength
and areas needing improvement, while students can gauge their own
learning and identify areas for growth.
5. Differentiation and Personalization: Learning targets help
teachers differentiate instruction to meet the diverse needs of students.
Similarly, assessments can be designed to provide opportunities for
students to demonstrate their understanding in various ways,
accommodating different learning styles and preferences.
6. Goal Setting and Reflection: Learning targets provide a basis for
setting learning goals and objectives. Assessment results inform
students and teachers about progress toward these goals, prompting
reflection on learning strategies and areas for further development.
What is the relationship of Learning
Targets to assessment?
7. Instructional Planning: Learning targets guide instructional
planning by informing the selection of teaching strategies, resources,
and activities that best support student attainment of the desired
learning outcomes. Assessment data also inform instructional decision-
making, allowing teachers to adjust their approaches based on student
needs.
IN SHORT, learning targets GUIDE
teachers in selecting appropriate
assessment methods in learning.
Matching Learning Targets with Paper-
and Pencil Types of Assessment
Learning Targets   MC    T/F   MT    SA    PS    Essay
Knowledge          AAA   AAA   AAA   AAA   AAA   AAA
Reasoning          AA    A     A     A     AAA   AAA
Skills             A     A     A     A     AA    AA
Product            A     A     A     A     A     A
NOTE: Selected response: MC = Multiple Choice, T/F = True or False, MT = Matching Type. Constructed response: SA = Short Answer, PS = Problem Solving, Essay. More “A”s mean a better match.
Matching Learning Targets with Other
Types of Assessment
Learning Targets   PB    P     R     O
Knowledge          A     AAA   AAA   AA
Reasoning          AA    AA    AAA   AA
Skills             AA    AAA   A     AA
Product            AAA   AAA   A     A
NOTE: PB = Project-Based, P = Portfolio, R = Recitation, O = Observation. More “A”s mean a better match.
Activity 2: Case Study Analysis:
“Assessing Learning in a Third-Grade
Mathematics Class”
Background:
Mrs. Thompson teaches third-grade mathematics at Oakridge
Elementary School. She is in the midst of a unit on multiplication and
division, and she wants to assess her students' understanding of these
foundational concepts. Mrs. Thompson has a diverse group of 25
students in her class, each with varying levels of mathematical
proficiency.
Scenario:
Mrs. Thompson is planning her unit assessment and wants to ensure
that it accurately measures her students' mastery of multiplication and
division skills. She decides to design a variety of assessment tasks to
accommodate different learning styles and abilities.
The assessment includes the following components:
Written Assessment: A written test consisting of a combination of
multiple-choice questions, short-answer questions, and word
problems related to multiplication and division.
Hands-On Activities: Hands-on activities such as manipulative-
based tasks and group problem-solving exercises to assess
students' ability to apply multiplication and division concepts in
real-world contexts.
Performance Tasks: Performance tasks where students
demonstrate their understanding through activities like creating
arrays, solving word problems independently, and explaining their
problem-solving strategies.
As Mrs. Thompson reviews her assessment plan, she considers
how to provide constructive feedback to her students. She wants
to ensure that her feedback is supportive and encourages students
to reflect on their learning progress.
How to Write Your Case
Analysis
ANSWER:
PROBLEM STATEMENT:
Mrs. Thompson, a third-grade mathematics
teacher at Oakridge Elementary School, needs
to design an effective assessment to measure
her students' understanding of multiplication
and division concepts. She faces the challenge
of accommodating diverse learners while
ensuring that the assessment aligns with
learning objectives and provides meaningful
feedback to support student learning.
ANSWER:
Recommendation:
To address these challenges, Mrs. Thompson
should continue with her multifaceted
assessment approach, including written
assessments, hands-on activities, and
performance tasks. She should also implement
strategies for providing personalized and
constructive feedback tailored to each student's
needs and performance.
ANSWER:
Evidence and Supporting Arguments:
Multifaceted Assessment Approach: Incorporating various assessment
components allows Mrs. Thompson to capture a holistic view of her students'
understanding of multiplication and division. Written assessments provide
insight into students' conceptual understanding and procedural fluency, while
hands-on activities and performance tasks assess their ability to apply these
concepts in real-world contexts.
Accommodating Diverse Learners: The inclusion of hands-on activities and
performance tasks enables Mrs. Thompson to cater to students with diverse
learning styles and abilities. Manipulative-based tasks and group problem-
solving exercises offer opportunities for active engagement and promote
deeper understanding among students with varying levels of mathematical
proficiency.
Personalized Feedback: By providing personalized feedback based on
individual student performance, Mrs. Thompson can address specific strengths
and areas for improvement. Constructive feedback encourages students to
reflect on their learning progress and fosters a growth mindset. Additionally,
offering guidance on problem-solving strategies helps students develop critical
thinking skills and enhances their overall mathematical proficiency.
ANSWER:
Conclusion:
In conclusion, Mrs. Thompson's multifaceted assessment
approach, coupled with personalized feedback
strategies, allows her to effectively assess her students'
understanding of multiplication and division concepts in
the third-grade mathematics class at Oakridge
Elementary School. By accommodating diverse learners
and fostering reflection and growth through constructive
feedback, Mrs. Thompson promotes student
engagement, learning, and development in mathematics.
Activity 3: Assessment Plan
SUBJECT
SPECIFIC LESSON
LEARNING OBJECTIVES
LEARNING TARGETS
ASSESSMENT ACTIVITY/TASK
Why use of this assessment?
How does this assessment task/
activity improve your
instruction?
How does this assessment
task/activity help your learners
achieve the intended learning
objectives?
THANK YOU!
LESSON 3:
Different
Classifications of
Assessment
LEARNING OUTCOMES:
In this lesson, you are expected to:
illustrate scenarios in the use of
different classifications of
assessment,
rationalize the purposes of different forms of assessment, and
decide on the kind of assessment to be used.
What are the different
classifications of assessment?
CLASSIFICATION: TYPES
1. Purpose: Educational, Psychological
2. Form: Paper-and-Pencil, Performance-Based
3. Function: Teacher-Made, Standardized
4. Kind of Learning: Achievement, Aptitude
5. Ability: Speed, Power
6. Interpretation of Learning: Norm-Referenced, Criterion-Referenced
1. PURPOSE
a. Educational
 Used in a school setting for
the purpose of tracking the
growth of the learners and
grading their performance.
 Comes from formative and
summative assessments to
provide information about
student learning.
 Parallelism between the tasks provided must be observed.
b. Psychological
 These are tests and scales that measure and determine the learner's cognitive and non-cognitive characteristics.
 Examples: Ability, aptitude,
intelligence, and critical thinking.
 Affective measures for
personality, motivation, attitude,
interest, and disposition.
 Used by guidance counselors for learners' academic, career, and social and emotional development.
2. FORM
a. Paper-&-Pencil
 Are cognitive tasks that
require a single correct
answer.
 Usually come in test types
(T or F, Identification, MT,
MC).
 Pertain to a specific
cognitive skill
(Remembering,
Understanding, Applying,
Analyzing, Evaluating, and
Creating).
b. Performance-based
 Requires students to
perform tasks
(demonstrations, arrive at
product, show strategies,
present information).
 Focuses on skills
development that are
complex.
 Usually open-ended; learners may arrive at various possible responses.
Check your understanding!
1. Color the shapes according to the instructions.
2. Connect the dots to reveal the hidden picture.
3. Circle the word that rhymes with 'cat'.
4. Draw a line to match each number to its corresponding
quantity.
5. Write the missing letter to complete the word.
6. Recite a short poem about friendship in front of the class.
7. Build a model of a simple machine using classroom
materials.
8. Demonstrate how to properly wash your hands to keep
germs away.
9. Role-play a scenario where you're showing kindness to a
friend.
10. Present a short story you wrote about your favorite animal
to the class.
3. FUNCTION
a. Teacher-made
 Also called non-
standardized, usually
intended for classroom
assessment.
 Are used for classroom
purposes, such as
determining whether
learners have reached the
learning target.
 Examples: Formative and
summative assessments
b. Standardized
 Have fixed directions for
administering, scoring, and
interpreting results.
 Can be purchased with test
manuals, booklets, and
answer sheets.
 Normed on a large sample of the target group, called the norm group.
4. Kind of Learning
a. Achievement
 Measure what learners
have learned after
instruction or after going
through a specific curricular
program.
 Measure of what a person
has learned within or up to
a given time.
 Measure the accomplished
skills and indicate what a
person can do at present.
b. Aptitude
 Aptitudes are characteristics that influence a person's behavior and aid goal attainment in a particular situation.
 Refers to the degree of readiness
to learn and perform well in a
particular situation or domain.
 Examples: the ability to comprehend instructions, manage one's time, use acquired knowledge, manage emotions, make good inferences/generalizations, etc.
5. Ability
a. Speed Tests
 A speed test measures how quickly
an individual can complete a given
task or set of tasks within a specific
time frame.
 The emphasis is on completing tasks
with accuracy and efficiency,
focusing on the speed of
performance.
 Examples of speed tests include
timed math drills, reading
comprehension tests with time limits,
and timed typing exercises.
 Speed tests are often used to assess processing speed, reaction time, and the ability to work efficiently under time pressure.
b. Power Tests
 A power test assesses an individual's
ability to solve complex problems or
perform tasks that may not necessarily
be constrained by time limits.
 The focus of a power test is on the
depth of understanding, problem-
solving ability, and cognitive skills
required to tackle challenging tasks.
 Examples of power tests include
standardized tests like the SAT or
GRE, which include questions of
varying difficulty levels and allow test-
takers to spend more time on each
question.
 Power tests aim to measure intellectual
capacity, reasoning ability, and the
application of knowledge in more open-
ended contexts.
6. Interpretation of Learning
a. Norm-referenced
 Purpose: Norm-referenced tests are
designed to compare an individual's
performance against a group of peers
or a "norming group."
 Interpretation: Scores are reported in
percentiles, which indicate where an
individual's performance ranks relative
to others in the norming group.
 Focus: The focus is on relative standing
and ranking among test takers rather
than mastery of specific content.
 Examples of norm-referenced tests
include standardized achievement tests
like the Scholastic Aptitude Test (SAT),
American College Testing (ACT), and
intelligence quotient (IQ) tests.
b. Criterion-referenced
 Purpose: Criterion-referenced tests
aim to evaluate an individual's level of
mastery or proficiency in specific
content areas or skills.
 Interpretation: Scores indicate the
degree to which the individual has met
predetermined criteria or standards.
 Focus: The focus is on whether the
individual has achieved a certain level
of proficiency or mastery in the content
area being assessed.
 Examples: State standardized tests,
licensure exams, proficiency exams,
mastery tests.
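The percentile interpretation that distinguishes norm-referenced tests can be sketched as follows (the scores, function name, and this particular definition of percentile rank, the percentage scoring at or below, are illustrative assumptions; conventions vary):

```python
def percentile_rank(score, norm_scores):
    """Percentage of the norming group scoring at or below the given score.
    (One common definition of percentile rank; others exist.)"""
    at_or_below = sum(1 for s in norm_scores if s <= score)
    return 100.0 * at_or_below / len(norm_scores)

# A hypothetical norming group of ten examinees:
norm_group = [55, 60, 62, 70, 71, 75, 80, 85, 90, 95]

# A score of 75 sits at the 60th percentile of this group:
print(percentile_rank(75, norm_group))  # 60.0
```

A criterion-referenced interpretation, by contrast, would compare the score of 75 against a fixed standard (say, an 80% mastery cut-off) regardless of how others performed.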
ACTIVITY 4: (TRUE OR
FALSE)
1. True or False: The SAT is an example of a
psychological assessment.
2. True or False: Performance-based assessments are
always paper-and-pencil tests.
3. True or False: Teacher-made assessments are
standardized tests.
4. True or False: An achievement test measures
inherent abilities.
5. True or False: Aptitude tests assess specific skills or
knowledge.
ACTIVITY 4: (TRUE OR
FALSE)
6. True or False: Speed is a measure of how quickly
tasks can be completed.
7. True or False: Power tests focus on the depth or
complexity of problem-solving abilities.
8. True or False: Norm-referenced tests compare
individuals' performance against predetermined
standards.
9. True or False: Criterion-referenced tests determine if
individuals have met specific criteria or standards.
10. True or False: IQ tests are examples of educational
assessments.
ACTIVITY 4: (TRUE OR
FALSE)
11. True or False: A spelling test is a performance-based
assessment.
12. True or False: Standardized tests are always
teacher-made.
13. True or False: Aptitude tests measure achievement
in a specific subject area.
14. True or False: Criterion-referenced tests focus on
ranking test takers relative to each other.
15. True or False: The GRE (Graduate Record
Examination) is an example of a speed test.
ACTIVITY 4: (TRUE OR
FALSE)
16. True or False: Power tests assess the rate or
efficiency of completing tasks.
17. True or False: Psychological assessments measure
cognitive abilities and emotional intelligence.
18. True or False: Teacher-made assessments are
designed to compare individuals' performance against
each other.
19. True or False: Criterion-referenced tests are used to
determine if individuals have achieved specific
standards.
20. True or False: The ACT (American College Testing)
is a norm-referenced test.
LET’S CHECK!
1. True or False: The SAT is an example of a psychological assessment. (F)
2. True or False: Performance-based assessments are always paper-and-pencil tests. (F)
3. True or False: Teacher-made assessments are standardized tests. (F)
4. True or False: An achievement test measures inherent abilities. (F)
5. True or False: Aptitude tests assess specific skills or knowledge. (F)
LET’S CHECK!
6. True or False: Speed is a measure of how quickly tasks can be completed. (T)
7. True or False: Power tests focus on the depth or complexity of problem-solving abilities. (T)
8. True or False: Norm-referenced tests compare individuals' performance against predetermined standards. (T)
9. True or False: Criterion-referenced tests determine if individuals have met specific criteria or standards. (T)
10. True or False: IQ tests are examples of educational assessments. (F)
LET’S CHECK!
11. True or False: A spelling test is a performance-based assessment. (T)
12. True or False: Standardized tests are always teacher-made. (F)
13. True or False: Aptitude tests measure achievement in a specific subject area. (F)
14. True or False: Criterion-referenced tests focus on ranking test takers relative to each other. (F)
15. True or False: The GRE (Graduate Record Examination) is an example of a speed test. (F)
LET’S CHECK!
16. True or False: Power tests assess the rate or efficiency of completing tasks. (F)
17. True or False: Psychological assessments measure cognitive abilities and emotional intelligence. (T)
18. True or False: Teacher-made assessments are designed to compare individuals' performance against each other. (F)
19. True or False: Criterion-referenced tests are used to determine if individuals have achieved specific standards. (T)
20. True or False: The ACT (American College Testing) is a norm-referenced test. (T)
THANK YOU!
LESSON 4:
Planning a Written
Test (2 Weeks)
LEARNING OUTCOMES:
In this lesson, you are expected to:
set appropriate instructional
objectives for a written test and
prepare a table of specifications
for a written test.
Why do you need to define the test
objectives or learning outcomes
targeted for assessment?
 Clear articulation of learning outcomes is a primary consideration in lesson planning because it serves as the basis for evaluating the effectiveness of the teaching and learning process, as determined through testing or assessment.
 Learning objectives/outcomes are measurable statements that articulate, at the beginning of a course, what students should know, be able to do, or value as a result of taking the course.
Why do you need to define the test
objectives or learning outcomes
targeted for assessment?
 These learning goals provide the rationale for the curriculum and instruction.
 They provide teachers with focus and direction on how the course is to be handled, particularly in terms of course content, instruction, and assessment, and provide students with the reasons and motivation to study and persevere.
 Setting objectives for assessment is the process of establishing direction to guide both the teacher in teaching and the student in learning.
 Setting objectives for assessment is the process
of establishing direction to guide both the
teacher in teaching and the student in learning.
What are the objectives for testing?
 In developing a written test, the cognitive
behaviors of learning outcomes are usually
targeted.
 For the cognitive domain, it is important to
identify the levels of behavior expected from
students.
What is a table of specifications?
 Sometimes called a test blueprint.
 A tool used by teachers to design a test.
 It is a table that maps out the test
objectives, contents, or topics covered by
the test, the levels of cognitive behavior to
be measured, the distribution of items, the
number, placement, and weights of test
items, and the test format.
 Generally, a TOS is prepared before a test is created.
Why is TOS important?
 Ensures that the instructional objectives and what
the test captures match.
 Ensures that the test developer will not overlook
details that are considered essential to a good
test.
 Makes developing a test easier and more
efficient.
 Ensures that the test will sample all important content areas and processes.
 Is useful in planning and organizing.
 Offers an opportunity for teachers and students to
clarify achievement expectations.
SAMPLE TOS
What are the general steps in
developing a TOS?
1. Determine the objectives of a test. For
a written test, you can consider cognitive
objectives, ranging from remembering to
creating ideas, that could be measured
using common formats for testing.
2. Determine the coverage of the test.
Only topics or contents that have been
discussed in class and are relevant should
be included in the test.
What are the general steps in
developing a TOS?
3. Calculate the weight for each topic.
Topics/Learning Content Time Spent Percentage of Time
Theories and Concepts 30 MINUTES 10%
Psychoanalytic Theories 90 MINUTES 30%
Trait Theories 60 MINUTES 20%
Humanistic Theories 30 MINUTES 10%
Cognitive Theories 30 MINUTES 10%
Behavioral Theories 30 MINUTES 10%
Social Learning Theories 30 MINUTES 10%
TOTAL 300 MINUTES (5 hrs) 100%
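The weights in the table above are simply each topic's share of the total instructional time. A minimal Python sketch (topic names and minutes taken from the sample table):

```python
# Step 3: convert time spent per topic into percentage weights.
time_spent = {  # topic -> minutes of instruction
    "Theories and Concepts": 30,
    "Psychoanalytic Theories": 90,
    "Trait Theories": 60,
    "Humanistic Theories": 30,
    "Cognitive Theories": 30,
    "Behavioral Theories": 30,
    "Social Learning Theories": 30,
}
total = sum(time_spent.values())  # 300 minutes (5 hours)
weights = {topic: minutes / total * 100 for topic, minutes in time_spent.items()}
for topic, pct in weights.items():
    print(f"{topic}: {pct:.0f}%")
```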
What are the general steps in
developing a TOS?
4. Determine the number of items for the
whole test.
 To determine the number of items to be included in
the test, the amount of time needed to answer the
items is considered.
 General rule: Students are given 30-60 seconds for each item in test formats with choices; hence, for a 1-hour class, the test should not exceed 60 items.
 However, because you also need to allot time for distributing the test papers and giving instructions, the number of items should be fewer, perhaps around 50 items.
What are the general steps in
developing a TOS?
5. Determine the number of items per
topic.
Topics/Learning Content Percentage of Time No. of Items
Theories and Concepts 10 % 5
Psychoanalytic Theories 30% 15
Trait Theories 20% 10
Humanistic Theories 10% 5
Cognitive Theories 10% 5
Behavioral Theories 10% 5
Social Learning Theories 10% 5
TOTAL 100% 50 items
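The allocation in step 5 multiplies each topic's weight by the planned test length. A short sketch continuing the same example; note that for other weight sets rounding can make the total drift from 50, so the final sum should always be checked:

```python
# Step 5: distribute a 50-item test across topics by percentage weight.
weights = {  # topic -> percentage of instructional time (from step 3)
    "Theories and Concepts": 10,
    "Psychoanalytic Theories": 30,
    "Trait Theories": 20,
    "Humanistic Theories": 10,
    "Cognitive Theories": 10,
    "Behavioral Theories": 10,
    "Social Learning Theories": 10,
}
total_items = 50
items = {topic: round(total_items * pct / 100) for topic, pct in weights.items()}
assert sum(items.values()) == total_items  # verify rounding did not drift
print(items["Psychoanalytic Theories"])  # 15 items
```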
SAMPLE TOS
What are the different formats
of a TOS?
1. One-Way TOS
 Maps out the content or topic, test objectives,
number of hours spent, and format, number, and
placement of items.
 This type of TOS is easy to develop and use
because it just works around the objectives without
considering the different levels of cognitive
behaviors.
What are the different formats
of a TOS?
Sample of One-Way TOS
What are the different formats
of a TOS?
2. Two-Way TOS
 Reflects not only the content, time spent, and number of items but also the levels of cognitive behavior targeted per test content, based on the theory behind cognitive testing.
 One advantage is that it allows one to see the
levels of cognitive skills and dimensions of
knowledge that are emphasized by the test.
 Also shows the framework of assessment used in
the development of the test.
What are the different formats
of a TOS?
Sample of Two-Way TOS
What are the different formats
of a TOS?
3. Three-Way TOS
 Reflects the features of one-way and two-way TOS.
 One advantage is that it challenges the test writer to
classify objectives based on the theory behind the
assessment.
 It also shows the variability of thinking skills
targeted by the test.
 However, it takes much longer to develop this type
of TOS.
What are the different formats
of a TOS?
Sample of Three-Way TOS
ACTIVITY 5: Experiential
Learning (TOS Making)
1. Identify a subject in Elementary from Grades 1-6.
2. Ask the subject teacher for the syllabus of the chosen subject.
3. Using the TOS format of DMMMSU, create a TOS for
the third or fourth quarter depending on the subject
teacher’s choice.
4. Submit the created TOS to the subject teacher for
corrections and scoring!
5. Submit the created TOS and the result of the
evaluation to me.
Rubrics for the TOS!
1 Point 2 Points 3 Points 4 Points 5 Points
1. Clarity of
Objectives
Objectives are
unclear and
misaligned.
Some objectives are
unclear or
misaligned.
Most objectives are
clear and aligned.
Objectives are clearly
defined and well-
aligned.
Objectives are
exceptionally clear
and perfectly
aligned.
2. Coverage of
Content
Major content areas
are missing or
incomplete.
Some major content
areas are missing.
Most major content
areas are covered.
All major content
areas are covered.
All major content
areas are
comprehensively
covered.
3. Depth of Content
Content depth is
lacking and
superficial.
Content depth is
inconsistent.
Content depth is
adequate.
Content depth is
substantial.
Content depth is
profound and
comprehensive.
4. Balance of
Cognitive Levels
Imbalance in
cognitive levels.
Some imbalance in
cognitive levels.
Adequate balance of
cognitive levels.
Good balance of
cognitive levels.
Excellent balance of
cognitive levels.
5. Weightage of
Objectives
Objectives are poorly
weighted.
Some objectives are
inaccurately
weighted.
Objectives are
adequately
weighted.
Objectives are well-
weighted.
Objectives are
perfectly weighted.
6. Clarity of
Assessment
Methods
Assessment
methods are unclear.
Some assessment
methods are unclear.
Assessment
methods are mostly
clear.
Assessment
methods are clear
and concise.
Assessment
methods are
exceptionally clear.
7. Format and
Organization
Poorly organized
and difficult to
follow.
Somewhat
disorganized and
challenging to
follow.
Mostly well-
organized and easy
to follow.
Well-organized and
easy to follow.
Exceptionally well-
organized and easy
to follow.
8. Adaptability and
Flexibility
Table is not
adaptable or flexible.
Table has limited
adaptability or
flexibility.
Table is somewhat
adaptable and
flexible.
Table is adaptable
and flexible.
Table is highly
adaptable and
flexible.
THANK YOU!
LESSON 5:
Construction of
Written Tests (2
Weeks)
LEARNING OUTCOMES:
1. Identify the appropriate test
format to measure learning
outcomes, and
2. Apply the general guidelines
in constructing test items for
different test formats.
What are the general guidelines for
choosing the appropriate test format?
1. What are the objectives or desired learning outcomes
of the subject/unit/lesson being assessed?
2. What level of thinking is to be assessed
(remembering, understanding, applying, analyzing,
evaluating, and creating?) Does the cognitive level of the
test question match your instructional objectives?
Note: For Remembering (R) and Understanding (U), use selected-response formats; for higher-order thinking skills (HOTS), use constructed-response formats.
3. Is the test matched or aligned with the course’s DLOs
and the course contents or learning activities?
4. Are the test items realistic to the students?
What are the major categories and
formats of traditional tests?
1. Selected-
Response Tests
2. Constructed-
Response Tests
 Require learners to choose
the correct answers or best
alternative from several
choices.
 While they cover a wide
range of learning materials
very efficiently and measure
a variety of learning
outcomes, they are limited
when assessing learning
outcomes that involve more
complex and higher thinking
skills.
 Require learners to supply
answers to a given question
or problem.
What are examples of
selected-response tests?
Test Description
1. Multiple
Choice Test
Is the most commonly used format in formal testing and typically consists of a stem (problem), one correct or best alternative (correct answer), and three or more incorrect or inferior alternatives (distractors).
2. True or False
or Alternative
Response Test
It generally consists of a statement, and the learner decides whether the statement is true (accurate/correct) or false (inaccurate/incorrect).
3. Matching
Types
It consists of two sets of items to be
matched with each other based on a
specified attribute.
What are examples of
constructed-response tests?
Test Description
1. Short Answer
Test
It consists of open-ended questions or incomplete sentences that require learners to create an answer for each item, which is typically a single word or short phrase (completion, identification, enumeration).
2. Essay Test It consists of problems/questions that require
learners to compose or construct written
responses, usually long ones with several
paragraphs.
3. Problem-
Solving Test
It consists of problems/questions that require
learners to solve problems in quantitative or non-
quantitative settings using knowledge and skills in
mathematical concepts and procedures, and/or
other HOT skills (reasoning, analysis, critical
thinking).
General Guidelines for Writing
Multiple Choice Test Items
Content
1. Write items that each reflect only one specific content area and cognitive processing skill.
2. Do not lift and use statements from the textbook or
other learning materials as test questions.
3. Keep the vocabulary simple and understandable
based on the level of learners/examinees.
4. Edit and proofread the items for grammatical and spelling errors before administering them to the learners.
General Guidelines for Writing
Multiple Choice Test Items
STEM (Problem)
1. Write the directions in the stem clearly and
understandably.
2. Write stems that are consistent in form and structure; that is, present all items either in question form or in descriptive or declarative form.
Faulty:
1. Who was the Philippine president during Martial Law?
2. The first president of the Commonwealth of the
Philippines was?
General Guidelines for Writing
Multiple Choice Test Items
STEM (Problem)
3. Word the stem positively and avoid double negatives, such as NOT and EXCEPT, in a stem. If a negative word is necessary, underline or capitalize the word for emphasis.
Faulty: Which of the following is not a measure
of variability?
Good: Which of the following is NOT a measure
of variability?
General Guidelines for Writing
Multiple Choice Test Items
STEM (Problem)
4. Refrain from making the stem too wordy or containing
too much information unless the problem/question requires
the facts presented to solve the problem.
FAULTY: What does DNA stand for, and what is the
organic chemical of complex molecular structure found in
all cells and viruses and codes genetic information for the
transmission of inherited traits?
GOOD: As a chemical compound, what does DNA stand
for?
General Guidelines for Writing
Multiple Choice Test Items
OPTIONS
1. Provide three (3) to five (5) options per item, with only
one being the correct or best answer/alternative.
2. Write options that are parallel or similar in form and
length to avoid giving clues about the correct answer.
3. Place options in a logical order (e.g., alphabetical, from
shortest to longest).
4. Place correct responses randomly to avoid a discernable
pattern of correct answers.
5. Use None-of-the-above carefully and only when there is
one correct answer.
General Guidelines for Writing
Multiple Choice Test Items
OPTIONS
6. Avoid All of the Above as an option, especially if it is
intended to be the correct answer.
7. Make all options realistic and reasonable.
What are the general guidelines for
writing matching-type items?
Note: Matching type is most appropriate when you need to
measure the learners’ ability to identify the relationship or
association between similar items (parallel concepts).
1. Clearly state in the directions the basis for matching the
stimuli with the responses.
FAULTY: Match the following.
GOOD: Column I is a list of countries while Column II
presents the continent where these countries are located.
Write the letter of the continent corresponding to the
country on the line provided in Column I.
What are the general guidelines for
writing matching-type items?
2. Ensure that the stimuli (Column A) are longer and the responses (Column B) shorter.
3. For each item, include only topics that are related to one
another and share the same foundation of information.
4. Make the response options short, homogeneous, and
arranged in logical order.
5. Include response options that are reasonable and
realistic and similar in length and grammatical form.
6. Provide more response options than the number of
stimuli.
What are the general guidelines for
writing True or False items?
1. Include statements that are completely true or
completely false.
2. Use simple and easy-to-understand statements.
3. Refrain from using negatives-especially double
negatives.
FAULTY: There is nothing illegal about buying goods
through the internet.
GOOD: It is legal to buy things or goods through the
internet.
4. Avoid using absolutes such as “always” and
“never”.
What are the general guidelines for
writing True or False items?
5. Express a single idea in each test item.
6. Avoid the use of unfamiliar words or vocabulary.
7. Avoid lifting statements from the textbook and other
learning materials.
What are the different variations of
True or False items?
1. T-F Correction or
Modified True-or-
False Question
2. Yes-No Variation
3. A-B Variation
 The statement is presented with a keyword or phrase underlined, and the learner has to supply the correct word or phrase if the statement is false. Example: MC test is authentic.
 The learner has to choose yes or no,
rather than true or false.
Example: The following are kinds of tests. Circle YES if it is an authentic test and NO if it is not.
 The learner has to choose A or B, rather than
True or False.
Example: Indicate which of the following are
traditional or authentic tests by circling A if it is a
traditional test and B if it is authentic.
What are the general guidelines for
writing short-answer test items?
Note: fill-in-the-blank or completion test items
1. Omit only the significant words from the statement.
FAULTY: Every atom has a central ___ called a nucleus.
GOOD: Every atom has a central core called a _____.
2. Do not omit too many words from the statement such as
that the intended meaning is lost.
3. Avoid obvious clues to the correct response.
4. Be sure that there is only one correct response.
FAULTY: The government should start using renewable
energy sources for generating electricity, such as ____.
GOOD: The government should start using renewable
sources of energy by using turbines called _____.
What are the general guidelines for
writing short-answer test items?
5. Avoid grammatical clues to the correct response.
Use: a(n)
6. If possible, put the blank at the end of a statement
rather than at the beginning.
FAULTY: ____ is the basic building block of matter.
GOOD: The basic building block of matter is ____.
What are the general guidelines for
writing essay tests?
Note:
 Essay tests are the preferred form of assessment
when teachers want to measure learners’ higher-
order thinking skills, particularly their ability to
reason, analyze, synthesize, and evaluate.
 They are most appropriate for assessing learners':
1. Understanding of subject-matter content,
2. Ability to reason with their knowledge of the subject, and
3. Problem-solving and decision-making skills, because the items or situations presented in the test are authentic or close to real-life experiences.
What are the general guidelines for
writing essay tests?
1. Clearly define the intended learning outcome to be
assessed by the essay test. Use verbs such as compose,
analyze, interpret, explain, and justify, among others.
2. Refrain from using essay tests for intended learning
outcomes that are better assessed by other kinds of
assessment.
3. Clearly define and situate the task within a problem
situation as well as the type of thinking required to answer
the test.
4. Present tasks that are fair, reasonable, and realistic to the
students.
5. Be specific in the prompts about the time allotment and
criteria for grading the response.
What are two types of essay tests?
EXTENDED-RESPONSE
Requires much longer
and more complex
responses
Imagine you are a superhero helping the
environment. Describe three things you
would do to save nature in your
neighborhood. Explain why each action is
important and how it helps plants, animals,
and people. Give specific examples.
RESTRICTED-RESPONSE
Is much more focused
and restrained
Think about a time you helped a
friend or a friend helped you.
Describe the situation and how
you felt. What did you learn about
friendship and kindness? Give
examples.
What are the general guidelines for
writing problem-solving tests?
1. Identify and explain the problem
clearly.
2. Be specific and clear about the type
of response required from the
students.
3. Specify in the directions the basis for
grading students’
answers/procedures.
What are the different variations of
quantitative problem-solving items?
1. One answer
choice
2. All possible
answer choices
3. Type-in answer
 This type of question contains 4/5 options,
and students are required to choose the
best answer. EXAMPLE: What is the
mean of the following score distribution:
32, 44, 56, 69?
 This type of question has 4/5 options, and students are required to choose all of the correct options. EXAMPLE: Which of the following is/are measure/s of central tendency? Indicate all possible answers.
 This type of question does not provide options to
choose from. Instead, the learners are asked to
supply the correct answer. EXAMPLE: Compute the
mean of the following score distribution: 32, 44, and
56. Indicate your answer in the blank provided.
ACTIVITY 6: Experiential
Learning (Writing a Test)
1. Review the Table of Specifications (TOS):
 Carefully review the TOS provided by your instructor for the subject
and grading period.
 Understand the objectives, content areas, cognitive levels, and
assessment methods outlined in the TOS.
2. Identify Key Objectives and Content Areas:
 Identify the key objectives and content areas specified in the TOS.
 Understand the depth and breadth of knowledge expected for each
objective.
3. Determine Assessment Methods:
 Pay attention to the assessment methods specified in the TOS for
each objective or content area.
 Understand the types of questions or tasks that will be used to
assess your understanding.
ACTIVITY 6: Experiential
Learning (Writing a Test)
4. Craft Test Questions or Tasks:
 Based on the objectives and content areas outlined in the TOS, craft
test questions or tasks that align with each objective.
 Ensure that the questions or tasks address the cognitive levels
specified in the TOS.
5. Distribute Questions or Tasks Evenly:
 Distribute the questions or tasks evenly across the content areas
and cognitive levels specified in the TOS.
 Ensure a balanced representation of different types of questions or
tasks (e.g., multiple choice, short answer, essay) if applicable.
6. Consider Time and Resources:
 Consider the time allocated for the test and the resources available
for assessment.
 Ensure that the test can be completed within the allotted time frame
and with the available resources.
ACTIVITY 6: Experiential
Learning (Writing a Test)
7. Ensure Clarity and Fairness:
 Ensure that test questions or tasks are clear, concise, and free of
ambiguity.
 Avoid biased language or content that may disadvantage certain groups
of students.
8. Review, Seek Feedback and Revise:
 Review the test questions or tasks to ensure alignment with the TOS
and clarity of assessment.
 Use feedback to refine and improve the test before finalizing it for
administration.
 Revise as needed to address any gaps or inconsistencies identified
during the review process.
9. Finalize the Test:
 Make any final adjustments or revisions based on feedback received.
 Ensure that the test is formatted and organized in a clear and
accessible manner for students.
RUBRIC IN GRADING THE TEST
Criteria 1 Point 2 Points 3 Points 4 Points 5 Points
1. Alignment with TOS
Test objectives and content
poorly align with TOS.
Some alignment with TOS
objectives and content.
Adequate alignment with TOS
objectives and content.
Good alignment with TOS
objectives and content.
Excellent alignment with TOS
objectives and content.
2. Clarity and
Understandability
Test questions/tasks are
confusing and unclear.
Some questions/tasks are
unclear.
Most questions/tasks are clear
and understandable.
Questions/tasks are clear and
understandable.
Questions/tasks are
exceptionally clear and
understandable.
3. Coverage of Content
Key content areas are poorly
covered or omitted.
Some key content areas are
inadequately covered.
Most key content areas are
adequately covered.
All key content areas are well-
covered.
All key content areas are
comprehensively covered.
4. Cognitive Level
Representation
Few questions/tasks align with
specified cognitive levels.
Some questions/tasks align with
specified cognitive levels.
Most questions/tasks align with
specified cognitive levels.
Questions/tasks align well with
specified cognitive levels.
Questions/tasks align perfectly
with specified cognitive levels.
5. Variety of Question Types
Limited variety of question
types. Some variety of question types.
Adequate variety of question
types. Good variety of question types.
Excellent variety of question
types.
6. Clarity of Instructions
Instructions are unclear and
confusing. Some instructions are unclear.
Most instructions are clear and
understandable.
Instructions are clear and
understandable.
Instructions are exceptionally
clear and understandable.
7. Organization and Format
Test lacks organization and
proper formatting.
Some organization and
formatting issues present.
Test is mostly well-organized
and formatted.
Test is well-organized and
formatted.
Test is exceptionally well-
organized and formatted.
8. Accuracy of Content
Content is inaccurate or
misleading.
Some inaccuracies or
inconsistencies in content.
Content is mostly accurate and
consistent.
Content is accurate and
consistent.
Content is exceptionally
accurate and consistent.
9. Revision and Improvement
Test lacks opportunities for
revision and improvement.
Limited opportunities for
revision and improvement.
Some opportunities for revision
and improvement.
Adequate opportunities for
revision and improvement.
Comprehensive opportunities
for revision and improvement.
10. Adherence to Assessment
Policies
Test does not adhere to
assessment policies.
Limited adherence to
assessment policies.
Mostly adheres to assessment
policies.
Adheres to assessment
policies.
Fully adheres to assessment
policies.
THANK YOU!
FINAL TERM!!!
LESSON 6:
Establishing Test
Validity and Reliability
What is test validity?
 A measure is valid when it measures what it is supposed to measure.
 If a quarterly exam is valid, then the contents
should directly measure the objectives of the
curriculum.
What are the different ways to
establish test validity?
TYPE OF
VALIDITY
DEFINITION PROCEDURE
1. CONTENT
VALIDITY
When the items
represent the domain
being measured.
The items are compared with
the objectives of the program.
The items need to measure
directly the objectives or
definition. A reviewer
conducts the checking.
2. FACE
VALIDITY
When the test is
presented well, free of
errors, and administered
well.
The test items and layout are
reviewed and tried out on a
small group of respondents. A
manual for administration can
be made as a guide for the
test administrator.
What are the different ways to
establish test validity?
TYPE OF
VALIDITY
DEFINITION PROCEDURE
3.
PREDICTIVE
VALIDITY
A measure should
predict a future criterion.
Example is an entrance
exam predicting the
grades of the students
after the first semester.
A correlation coefficient is
obtained where the X-variable
is used as the predictor and
the Y-variable as the criterion.
4. CONSTRUCT VALIDITY
The components or
factors of the test should
contain items that are
strongly correlated.
The Pearson r can be used to correlate the items for each factor. However, there is a
technique called factor
analysis to determine which
items are highly correlated to
form a factor.
What are the different ways to
establish test validity?
TYPE OF
VALIDITY
DEFINITION PROCEDURE
5. CONCURRENT VALIDITY
When two or more
measures are present for
each examinee that
measure the same
characteristic.
The scores on the measures
should be correlated.
6. CONVERGENT VALIDITY
When the components or factors of a test are hypothesized to have a positive correlation.
Correlation is done for the factors
of the test.
7. DIVERGENT
VALIDITY
When the components or factors of a test are hypothesized to have a negative correlation. An example is to correlate the scores on a test of intrinsic motivation with the scores on a test of extrinsic motivation.
Correlation is done for the factors of
the test.
What is test reliability?
 Reliability is the consistency of responses to a measure under three conditions: (1) when the same person is retested; (2) when retested using the same measure; and (3) when responses are similar across items that measure the same characteristic.
What are the different factors that
affect the reliability measure?
1. The number of items in a test- The more items a test has, the higher the likelihood of reliability. The probability of obtaining consistent scores is high because of the large pool of items.
2. Individual differences of participants- Every participant
possesses characteristics that affect their performance in a test,
such as fatigue, concentration, innate ability, perseverance, and
motivation. These individual factors change over time and affect
the consistency of the answers in a test.
3. External environment- The external environment may
include room temperature, noise level, depth of instruction,
exposure to materials, and quality of instruction, which could
affect changes in the responses of examinees in a test.
What are the different ways to
establish test reliability?
METHOD OF
TESTING
PROCEDURE STATISTICAL TOOL TO BE
USED
1. Test-
Retest
 Using Pre-Test and
Post-test.
 Time interval of a
minimum of 30
minutes and
maximum of 6
months.
 Applicable for tests
that measure stable
variables, such as
aptitude and
psychomotor
measures.
 Correlate the scores using the Pearson Product-Moment Correlation, or Pearson r.
 A significant and positive correlation indicates that the test has temporal stability over time.
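Pearson r itself is straightforward to compute by hand. A minimal sketch with hypothetical scores from a test given twice to the same five examinees (the pearson_r helper is written out here for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical first and second administrations of the same test
first_testing  = [10, 12, 15, 18, 20]
second_testing = [11, 13, 14, 19, 21]
print(round(pearson_r(first_testing, second_testing), 2))  # high positive r
```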
What are the different ways to
establish test reliability?
METHOD OF
TESTING
PROCEDURE STATISTICAL TOOL TO BE
USED
2. Parallel
Forms
 There are two
versions of the test.
 Administer one form at one time and the other form at another time to the same group.
 Done when a test is repeatedly used for different groups, such as entrance and licensure exams.
 Correlate the scores from
the forms (test versions)
using Pearson r.
What are the different ways to
establish test reliability?
METHOD OF
TESTING
PROCEDURE STATISTICAL TOOL TO BE
USED
3. Split-Half  Administer a test to
a group of
examinees.
 Items need to be split into halves using the odd-even technique.
 Correlate the sum
scores of the odd
and even from the
examinees.
 Applicable when test
has a large number
of items.
 Correlate the scores
using Pearson r.
 After correlation, use
another formula called
Spearman-Brown
Coefficient.
 The correlation obtained
using Pearson r and
Spearman Brown should
be positive to mean that
the test has internal
consistency.
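The split-half procedure can be sketched as follows, with a hypothetical 5-examinee by 6-item response matrix; the half-test correlation is stepped up to full-test length with the Spearman-Brown formula r_full = 2r / (1 + r):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(
        sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)
    )

# Hypothetical responses: rows = examinees, columns = items (1 correct, 0 wrong)
responses = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 0, 1, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1],
]
odd_half  = [sum(row[0::2]) for row in responses]  # items 1, 3, 5
even_half = [sum(row[1::2]) for row in responses]  # items 2, 4, 6
r_half = pearson_r(odd_half, even_half)
r_full = (2 * r_half) / (1 + r_half)  # Spearman-Brown correction
print(round(r_half, 2), round(r_full, 2))
```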
What are the different ways to
establish test reliability?
METHOD OF
TESTING
PROCEDURE STATISTICAL TOOL TO BE
USED
4. Test of
Internal
Consistency
Using Kuder-
Richardson
and
Cronbach’s
Alpha
Method
 Used to determine whether the scores for each item are consistently answered by the examinees.
 After test
administration,
determine and
record the scores.
 Mostly applicable for
scales and
inventories.
 Use statistical analysis
called Cronbach’s alpha
or Kuder Richardson.
 A Cronbach’s alpha value of 0.60 and above indicates that the test has internal consistency.
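Cronbach's alpha can be computed directly from an examinee-by-item score matrix using alpha = k/(k-1) x (1 - sum of item variances / total-score variance). A sketch with hypothetical scale ratings:

```python
def variance(values):
    """Population variance of a list of numbers."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

# Hypothetical ratings: rows = examinees, columns = scale items
scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]
k = len(scores[0])  # number of items
item_variances = [variance([row[i] for row in scores]) for i in range(k)]
total_variance = variance([sum(row) for row in scores])
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(round(alpha, 2))  # well above the 0.60 benchmark for this sample
```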
What are the different ways to
establish test reliability?
METHOD OF
TESTING
PROCEDURE STATISTICAL TOOL TO BE
USED
5. Inter-rater
Reliability
 Used to determine the consistency of multiple raters when using rating scales and rubrics to judge performance.
 The reliability here refers to the similar or consistent ratings provided by more than one rater.
 Kendall’s tau coefficient of concordance is used to determine whether the ratings provided by multiple raters agree with each other.
 Significant Kendall’s tau
value indicates that raters
concur or agree with each
other in their ratings.
How to determine the strength
of correlation?
 The strength of correlation is determined through the correlation coefficient value.
0.80-1.00 = Very strong relationship
0.60-0.79 = Strong relationship
0.40-0.59 = Moderate/Substantial relationship
0.20-0.39 = Weak relationship
0.00-0.19 = Negligible relationship
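These bands translate directly into a small lookup function. A sketch (the cut-offs mirror the list above, applied to the absolute value of r since strength concerns magnitude, not direction):

```python
def correlation_strength(r):
    """Classify a correlation coefficient using the bands listed above."""
    r = abs(r)  # strength is about magnitude, not direction
    if r >= 0.80:
        return "Very strong relationship"
    if r >= 0.60:
        return "Strong relationship"
    if r >= 0.40:
        return "Moderate/Substantial relationship"
    if r >= 0.20:
        return "Weak relationship"
    return "Negligible relationship"

print(correlation_strength(0.85))  # Very strong relationship
```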
How to determine if an item is
easy or difficult?
 An item is difficult if the majority of students are unable to provide the correct answer.
 An item is easy if the majority of students are able to answer it correctly.
 An item can discriminate if examinees who score high on the test answer more items correctly than examinees who got low scores.
How to determine if an item is
easy or difficult?
1. Get the total score of each student and arrange the scores from highest to lowest.
Item 1 Item 2 Item 3 Item 4 Item 5
S1 X X C C C
S2 C C C X C
S3 X X X C C
S4 X X X X C
S5 X C C C C
S6 C X C C X
S7 X X C C X
S8 X C C X X
S9 C X C C C
S10 C X C C X
How to determine if an item is
easy or difficult?
1. Get the total score of each student and arrange the scores from highest to lowest.
I1 I2 I3 I4 I5 TOTAL
SCORE
S2 C C C X C 4
S5 X C C C C 4
S9 C X C C C 4
S1 X X C C C 3
S6 C X C C X 3
S10 C X C C X 3
S3 X X X C C 2
S7 X X C C X 2
S8 X C C X X 2
S4 X X X X C 1
How to determine if an item is
easy or difficult?
2. Obtain the upper and lower 27% of the
group.
 Multiply 0.27 by the total number of students: 0.27 × 10 = 2.7.
 Then round the value: 3.
 Get the top 3 students and the bottom 3 students based on the total scores.
TOP 3: S2, S5, S9
BOTTOM 3: S7, S8, S4
How to determine if an item is
easy or difficult?
3. Obtain the proportion correct for
each item.
 This is computed for the upper 27%
group and the lower 27% group.
 This is done by summing the correct answers per item and dividing by the number of students in the group.
How to determine if an item is
easy or difficult?
I1 I2 I3 I4 I5 TOTAL
SCORE
S2 C C C X C 4
S5 X C C C C 4
S9 C X C C C 4
TOTAL 2 2 3 2 3
P OF HG 0.67 0.67 1.00 0.67 1.00 (divide by 3)
S7 X X C C X 2
S8 X C C X X 2
S4 X X X X C 1
TOTAL 0 1 2 1 1
P OF LG 0.00 0.33 0.67 0.33 0.33 (divide by 3)
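The proportion correct for a group is just a count of C's per item divided by the group size. A sketch using the upper 27% group (S2, S5, S9) from the table above:

```python
# Upper 27% group's responses: rows = S2, S5, S9; "C" correct, "X" wrong
upper_group = [
    ["C", "C", "C", "X", "C"],  # S2
    ["X", "C", "C", "C", "C"],  # S5
    ["C", "X", "C", "C", "C"],  # S9
]
num_items = len(upper_group[0])
p_high = [
    sum(row[i] == "C" for row in upper_group) / len(upper_group)
    for i in range(num_items)
]
print([round(p, 2) for p in p_high])  # [0.67, 0.67, 1.0, 0.67, 1.0]
```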
How to determine if an item is
easy or difficult?
4. The item difficulty is obtained using
the following formula:
Item difficulty = (pH + pL) / 2
Difficulty Index Remark
0.76 or higher Easy Item
0.25 to 0.75 Average Item
0.24 or lower Difficult Item
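The formula and the remark bands above combine into a small helper. A sketch, where pH and pL are the proportions correct in the upper and lower 27% groups:

```python
def item_difficulty(p_high, p_low):
    """Average the upper- and lower-group proportions and attach the remark."""
    p = (p_high + p_low) / 2
    if p >= 0.76:
        remark = "Easy Item"
    elif p >= 0.25:
        remark = "Average Item"
    else:
        remark = "Difficult Item"
    return p, remark

# Item 3 from the worked example: pH = 1.00, pL = 0.67
p, remark = item_difficulty(1.00, 0.67)
print(p, remark)
```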
How to determine if an item is
easy or difficult?
4. The item difficulty is obtained using
the following formula:
I1 I2 I3 I4 I5
Index of difficulty 0.33 0.50 0.83 0.50 0.67
Remark Average Average Easy Average Average
How to determine if an item is
easy or difficult?
5. The index of discrimination is
obtained using the formula:
Item discrimination = pH-pL
Index discrimination Remark
0.40 and above Very good item
0.30-0.39 Good item
0.20-0.29 Reasonably Good Item
0.10-0.19 Marginal Item
Below 0.10 Poor Item
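Likewise for discrimination; a sketch mirroring the remark bands above:

```python
def item_discrimination(p_high, p_low):
    """Subtract the lower-group proportion from the upper-group proportion."""
    d = p_high - p_low
    if d >= 0.40:
        remark = "Very good item"
    elif d >= 0.30:
        remark = "Good item"
    elif d >= 0.20:
        remark = "Reasonably good item"
    elif d >= 0.10:
        remark = "Marginal item"
    else:
        remark = "Poor item"
    return d, remark

# Item 2 from the worked example: pH = 0.67, pL = 0.33
d, remark = item_discrimination(0.67, 0.33)
print(round(d, 2), remark)
```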
How to determine if an item is
easy or difficult?
5. The index of discrimination is
obtained using the formula:
Item discrimination = pH-pL
I1 I2 I3 I4 I5
0.67-0 0.67-0.33 1.00-0.67 0.67-0.33 1.00-0.33
Index of discrimination 0.67 0.34 0.33 0.34 0.67
Remark Very
Good
Item
Good
Item
Good
Item
Good
Item
Very
Good
Item
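A sketch of the same computation in Python, reusing the pH and pL values obtained earlier (the classification cut-offs are the ones tabulated above):

```python
# Item discrimination: difference between upper- and lower-group proportions.
def item_discrimination(p_high: float, p_low: float) -> float:
    return p_high - p_low

# Classify a discrimination index using the cut-offs from the table above.
def discrimination_remark(index: float) -> str:
    if index >= 0.40:
        return "Very good item"
    if index >= 0.30:
        return "Good item"
    if index >= 0.20:
        return "Reasonably Good Item"
    if index >= 0.10:
        return "Marginal Item"
    return "Poor Item"

p_high = [0.67, 0.67, 1.00, 0.67, 1.00]   # p of the upper 27% group
p_low = [0.00, 0.33, 0.67, 0.33, 0.33]    # p of the lower 27% group
indices = [round(item_discrimination(h, l), 2)
           for h, l in zip(p_high, p_low)]
# indices -> [0.67, 0.34, 0.33, 0.34, 0.67]
```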
ACTIVITY 1: Determine the difficulty
and discrimination index of the
following items:
     Item 1  Item 2  Item 3  Item 4  Item 5
S1     C       C       C       C       C
S2     C       C       C       X       C
S3     X       X       X       X       X
S4     X       X       X       X       C
S5     X       C       C       C       C
S6     C       X       C       C       0
S7     X       X       C       C       X
S8     X       C       C       X       X
S9     C       X       X       X       X
S10    C       X       C       C       X
ACTIVITY 1: ANSWER
                Item 1      Item 2         Item 3          Item 4      Item 5
S1                C           C              C               C           C
S2                C           C              C               X           C
S5                X           C              C               C           C
p of HG          .67          1              1              .67          1
S7                X           X              C               C           X
S4                X           X              X               X           C
S9                C           X              X               X           X
p of LG          .33          0             .33             .33         .33
Difficulty     .50 (Ave)   .50 (Ave)      .67 (Ave)       .50 (Ave)   .67 (Ave)
Discrimination .34 (Good)  1 (Very Good)  .67 (Very Good) .34 (Good)  .67 (Very Good)
THANK YOU!
LESSON 7:
Organization of Test
Data Using Tables and
Graphs
How do we present test data
graphically?
1. Histogram
 A histogram is a type of graph appropriate for
quantitative data such as test scores.
 The graph consists of columns; each column's
base represents one class interval, and its
height represents the frequency of scores in
that interval.
How do we present test data
graphically?
2. Frequency Polygon
 Also used for quantitative data, it is one of
the most commonly used methods of
presenting test scores.
 It is very similar to a histogram, but instead of
bars, it uses lines, making it possible to
compare several sets of test data on the same axes.
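Both graphs are built on a frequency distribution: scores tallied into class intervals. A minimal sketch, where the scores and the interval width are hypothetical:

```python
from collections import Counter

# Tally hypothetical test scores into class intervals of width 5; the
# resulting counts are what a histogram's column heights (or a frequency
# polygon's points) would display.
scores = [12, 15, 17, 18, 21, 22, 22, 25, 28, 30]
width = 5
freq = Counter((s // width) * width for s in scores)

for lower_bound in sorted(freq):
    print(f"{lower_bound}-{lower_bound + width - 1}: {freq[lower_bound]}")
```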
How do we present test data
graphically?
3. Cumulative Frequency Polygon (Ogive)
 Plots the cumulative frequency of scores
against the upper boundary of each class
interval, showing how many scores fall at or
below a given point.
ASSESSMENT IN LEARNING 1-LESSONS 1-4 (1).ppt
When is a test considered to be good and effective?
 If it has acceptable psychometric properties.
 This means that a test should be valid, reliable, has an acceptable level of difficulty, and can discriminate between learners with higher and lower ability.
What is grading?
 The process of assigning value to the performance or achievement of a learner based on specified criteria or standards.
 Grades can be based on recitation, seatwork, homework, projects, and tests.
 Grading is a form of evaluation which provides information on whether a learner passed or failed a subject or a particular assessment task.

What are the different measurement frameworks used in assessment?
1. Classical Test Theory (CTT)
2. Item Response Theory (IRT)

What is Classical Test Theory?
 Known as true score theory, this explains that variations in the performance of examinees on a given measure are due to variations in their abilities.
 Assumes that all measures are imperfect (affected by internal and external conditions).
 Provides an estimation of item difficulty based on the frequency or number of examinees who correctly answer a particular item.
 Provides an estimation of item discrimination based on the number of examinees with a higher or lower ability to answer a particular item.

What is Item Response Theory?
 Analyzes test items by estimating the probability that an examinee answers an item correctly or incorrectly.
 Assumes that the characteristics of an item can be estimated independently of the characteristics or ability of the examinee, and vice versa.
 Aside from item difficulty and discrimination, IRT analysis can provide fit statistics and item characteristic curves.

What are the different types of assessment in learning?
1. Formative Assessment
2. Summative Assessment
3. Diagnostic Assessment
4. Placement Assessment
5. Traditional Assessment
6. Authentic Assessment
1. Formative Assessment
 Provides information to both teachers and learners on how they can improve the teaching-learning process.
 Used at the beginning of and during instruction for teachers to assess learners' understanding.
 Can be used to inform learners about their strengths and weaknesses to enable them to take steps to learn better and improve their performance as the class progresses.

2. Summative Assessment
 Aims to determine learners' mastery of content or attainment of learning outcomes.
 Typically used for evaluating learners' performance in class and providing teachers with information about the effectiveness of their teaching strategies and how they can improve their instruction in the future.
 Can inform learners about what they have done well and what they need to improve on in their future classes or subjects.

3. Diagnostic Assessment
 Aims to detect the learning problems or difficulties of the learners so that corrective measures or interventions can be done to ensure learning.
 Done right after seeing signs of learning problems in the course of teaching.
 Can also be done at the beginning of the school year for a spirally-designed curriculum.

4. Placement Assessment
 Done at the beginning of the school year to determine what the learners already know or what their needs are that could inform the design of instruction.
 Grouping of learners based on the results of the placement assessment is done before instruction.
 Example: Entrance Examination

5. Traditional Assessment
 Refers to the use of conventional assessment strategies/tools.
 Typically used as the basis for evaluating and grading learners.
 Viewed as an inauthentic type of assessment.

6. Authentic Assessment
 Refers to the use of assessment strategies or tools that allow learners to perform or create a product that is meaningful to the learners.
 The most authentic assessments are those that allow performances that most closely resemble real-world tasks or applications in real-world settings or environments.
What are the different principles in assessing learning?
1. Assessment should have a clear purpose.
2. Assessment is not an end in itself.
3. Assessment is an ongoing, continuous, and formative process.
4. Assessment is learner-centered.
5. Assessment is both process- and product-oriented.
6. Assessment must be comprehensive and holistic.
7. Assessment requires the use of appropriate measures.
8. Assessment should be as authentic as possible.

ACTIVITY 1: CONCEPT MAPPING
DIRECTIONS:
1. Create a graphic organizer to summarize and encapsulate the fundamental concepts and principles involved in assessing learning.
2. Following the illustration, elucidate/explain the relationships among the concepts.
LESSON 2: Assessment Purposes, Learning Targets, and Appropriate Methods

LEARNING OUTCOMES:
In this lesson, you are expected to:
explain the purpose of classroom assessment and,
formulate learning targets that match appropriate assessment methods.

What is the purpose of classroom assessment?
 The purpose of assessment may be classified in terms of the following:
1. Assessment of Learning
2. Assessment for Learning
3. Assessment as Learning

1. What is Assessment of Learning?
 Refers to the use of assessment to determine learners' acquired knowledge and skills from instruction and whether they were able to achieve the curriculum outcomes.
 It is generally summative in nature.

2. What is Assessment for Learning?
 Refers to the use of assessment to identify the needs of learners in order to modify instruction or learning activities in the classroom.
 It is formative in nature, and it is meant to identify gaps in the learning experiences of learners so they can be assisted in achieving the curriculum outcomes.

3. What is Assessment as Learning?
 Refers to the use of assessment to help learners become self-regulated.
 It is formative in nature and meant to use assessment tasks, results, and feedback to help learners practice self-regulation and make adjustments to achieve the curriculum outcomes.
What are the roles of classroom assessment in the teaching-learning process?
1. Formative: focuses on acquiring information on the current status and level of learners' knowledge and skills or competencies.
2. Diagnostic: focuses on identifying specific learners' weaknesses or difficulties that may affect their achievement.
3. Evaluative: focuses on measuring learners' performance or achievement for the purpose of making judgments, grading in particular.
4. Facilitative: focuses on improving the teaching-learning process.
5. Motivational: focuses on providing mechanisms for learners to be motivated and engaged in learning and achievement in the classroom.
What are Learning Targets?
 Statements of what learners are supposed to learn and what they can do because of instruction.
 Learning targets specify both the content and criteria of learning.

How are learning targets different from and related to Goals, Standards, and Objectives?
 Goals are general statements about desired learner outcomes in a given year or during the duration of a program.
 Standards are specific statements about what learners should know and be capable of doing at a particular grade level, subject, or course. Types: (1) Content, (2) Performance, (3) Development, and (4) Grade Level.
 Objectives are specific statements of learners' performance at the end of an instructional unit.

What are the 3 domains of Bloom's Taxonomy?
1. Cognitive: knowledge-based goals
2. Affective: feelings/emotions-based goals
3. Psychomotor: skills-based goals

Bloom's Taxonomy of Educational Objectives in the Cognitive Domain

Revised Bloom's Taxonomy of Educational Objectives in the Cognitive Domain (Anderson & Krathwohl, 2001)
Dimensions of the Cognitive Process (original vs. revised taxonomy)

1. Knowledge / Remember
Knowledge: Recall or recognition of learned materials like concepts, events, facts, ideas, and procedures. Illustrative verbs: define, recall, name, enumerate, and label.
Remember: Recognizing and recalling facts. Illustrative verbs: identify, list, name, underline, recall, retrieve, locate.

2. Comprehension / Understand
Comprehension: Understand the meaning of learned material, including interpretation, explanation, and literal translation. Illustrative verbs: explain, describe, summarize, discuss, and translate.
Understand: Understanding what the information means. Illustrative verbs: describe, determine, interpret, explain, translate, and paraphrase.

3. Application / Apply
Application: Use of abstract ideas, principles, or methods in specific concrete situations. Illustrative verbs: apply, demonstrate, produce, illustrate, and use.
Apply: Applying the facts, rules, concepts, and ideas in another context. Illustrative verbs: apply, employ, practice, relate, use, implement, carry out, and solve.

4. Analysis / Analyze
Analysis: Separation of a concept or idea into constituent parts or elements and an understanding of the nature of and association among the elements. Illustrative verbs: compare, contrast, categorize, classify, and calculate.
Analyze: Breaking down information into parts. Illustrative verbs: analyze, calculate, examine, test, compare, differentiate, organize, and classify.

5. Synthesis / Create (level 6 in the revised taxonomy)
Synthesis: Construction of elements or parts from different sources to form a more complex or novel structure. Illustrative verbs: compose, construct, create, design, and integrate.
Create: Combining parts to make a whole. Illustrative verbs: compose, produce, develop, formulate, devise, prepare, design, construct, propose, and reorganize.

6. Evaluation / Evaluate (level 5 in the revised taxonomy)
Evaluation: Making judgments of ideas or methods based on sound and established criteria. Illustrative verbs: appraise, evaluate, judge, conclude, and criticize.
Evaluate: Judging the value of information or data. Illustrative verbs: assess, measure, estimate, evaluate, critique, and judge.
Knowledge Dimensions
1. Factual: This is basic in a discipline. It tells the facts or bits of information. This type of knowledge usually answers questions that begin with who, where, what, and when. Sample question: What is the capital city of the Philippines?
2. Conceptual: It tells the concepts, generalizations, principles, theories, and models that one needs to know in a discipline. It usually answers questions that begin with what. Sample question: What makes the Philippines the "Pearl of the Orient Seas"?
3. Procedural: It tells the processes, steps, techniques, methodologies, or specific skills needed in performing a specific task. It usually answers questions that begin with how. Sample question: How do we develop items for an achievement test?
4. Metacognitive: It makes one understand the value of learning in one's life. It requires reflective knowledge and strategies on how to solve problems or perform a task through understanding oneself or the context. Sample question: Why is teaching the most suitable course for you?
Types of Learning Targets
1. Knowledge targets: Refer to the factual, conceptual, and procedural information that learners must learn in a subject or content area.
2. Reasoning targets: Knowledge-based thought processes that learners must learn. They involve the application of knowledge in problem-solving, decision-making, and other tasks that require mental skills.
3. Skills targets: Use of knowledge and/or reasoning to perform or demonstrate physical skills.
4. Product targets: Use of knowledge, reasoning, and skills in creating a concrete or tangible product.

Sample Learning Targets
Objective: At the end of the lesson, the students should be able to demonstrate their ability to write the literature review section of a thesis proposal.
K: "I can explain the principles in writing the literature review of a thesis proposal."
R: "I can argue the significance of my thesis through the literature review."
S: "I can search for and organize related literature from various sources."
P: "I can write an effective literature review section of a thesis proposal."
What is the relationship of Learning Targets to assessment?
1. Clarity of Expectations: Learning targets provide clear statements of what students are expected to know, understand, and be able to do by the end of a lesson, unit, or course. Assessment measures whether students have achieved these targets.
2. Alignment: Learning targets should be aligned with curriculum standards, instructional objectives, and assessment criteria. Assessments should directly reflect the learning targets to ensure that they effectively measure student attainment of the intended knowledge and skills.
3. Assessment Design: Learning targets guide the design of assessments. Educators develop assessment tasks, questions, and rubrics based on the specific learning targets to be assessed. This alignment ensures that assessments are meaningful and relevant to the learning objectives.
4. Feedback and Progress Monitoring: Assessments provide valuable feedback to both students and teachers regarding student understanding and progress toward achieving the learning targets. Through assessment results, teachers can identify areas of strength and areas needing improvement, while students can gauge their own learning and identify areas for growth.
5. Differentiation and Personalization: Learning targets help teachers differentiate instruction to meet the diverse needs of students. Similarly, assessments can be designed to provide opportunities for students to demonstrate their understanding in various ways, accommodating different learning styles and preferences.
6. Goal Setting and Reflection: Learning targets provide a basis for setting learning goals and objectives. Assessment results inform students and teachers about progress toward these goals, prompting reflection on learning strategies and areas for further development.
7. Instructional Planning: Learning targets guide instructional planning by informing the selection of teaching strategies, resources, and activities that best support student attainment of the desired learning outcomes. Assessment data also inform instructional decision-making, allowing teachers to adjust their approaches based on student needs.
IN SHORT, learning targets GUIDE teachers in selecting appropriate assessment methods in learning.
Matching Learning Targets with Paper-and-Pencil Types of Assessment

                   Selected Response      Constructed Response
Learning Targets   MC     T/F    MT       SA     PS     Essay
Knowledge          AAA    AAA    AAA      AAA    AAA    AAA
Reasoning          AA     A      A        A      AAA    AAA
Skills             A      A      A        A      AA     AA
Product            A      A      A        A      A      A

NOTE: MC = Multiple Choice, T/F = True or False, MT = Matching Type, SA = Short Answer, PS = Problem Solving. MORE "A"s MEAN A BETTER MATCH.

Matching Learning Targets with Other Types of Assessment

Learning Targets   PB     P      R      O
Knowledge          A      AAA    AAA    AA
Reasoning          AA     AA     AAA    AA
Skills             AA     AAA    A      AA
Product            AAA    AAA    A      A

NOTE: PB = Project-Based, P = Portfolio, R = Recitation, O = Observation. MORE "A"s MEAN A BETTER MATCH.
Activity 2: Case Study Analysis: "Assessing Learning in a Third-Grade Mathematics Class"
Background: Mrs. Thompson teaches third-grade mathematics at Oakridge Elementary School. She is in the midst of a unit on multiplication and division, and she wants to assess her students' understanding of these foundational concepts. Mrs. Thompson has a diverse group of 25 students in her class, each with varying levels of mathematical proficiency.
Scenario: Mrs. Thompson is planning her unit assessment and wants to ensure that it accurately measures her students' mastery of multiplication and division skills. She decides to design a variety of assessment tasks to accommodate different learning styles and abilities.

The assessment includes the following components:
Written Assessment: A written test consisting of a combination of multiple-choice questions, short-answer questions, and word problems related to multiplication and division.
Hands-On Activities: Hands-on activities such as manipulative-based tasks and group problem-solving exercises to assess students' ability to apply multiplication and division concepts in real-world contexts.
Performance Tasks: Performance tasks where students demonstrate their understanding through activities like creating arrays, solving word problems independently, and explaining their problem-solving strategies.
As Mrs. Thompson reviews her assessment plan, she considers how to provide constructive feedback to her students. She wants to ensure that her feedback is supportive and encourages students to reflect on their learning progress.

How to Write Your Case Analysis
ANSWER:
PROBLEM STATEMENT: Mrs. Thompson, a third-grade mathematics teacher at Oakridge Elementary School, needs to design an effective assessment to measure her students' understanding of multiplication and division concepts. She faces the challenge of accommodating diverse learners while ensuring that the assessment aligns with learning objectives and provides meaningful feedback to support student learning.

RECOMMENDATION: To address these challenges, Mrs. Thompson should continue with her multifaceted assessment approach, including written assessments, hands-on activities, and performance tasks. She should also implement strategies for providing personalized and constructive feedback tailored to each student's needs and performance.

EVIDENCE AND SUPPORTING ARGUMENTS:
Multifaceted Assessment Approach: Incorporating various assessment components allows Mrs. Thompson to capture a holistic view of her students' understanding of multiplication and division. Written assessments provide insight into students' conceptual understanding and procedural fluency, while hands-on activities and performance tasks assess their ability to apply these concepts in real-world contexts.
Accommodating Diverse Learners: The inclusion of hands-on activities and performance tasks enables Mrs. Thompson to cater to students with diverse learning styles and abilities. Manipulative-based tasks and group problem-solving exercises offer opportunities for active engagement and promote deeper understanding among students with varying levels of mathematical proficiency.
Personalized Feedback: By providing personalized feedback based on individual student performance, Mrs. Thompson can address specific strengths and areas for improvement. Constructive feedback encourages students to reflect on their learning progress and fosters a growth mindset. Additionally, offering guidance on problem-solving strategies helps students develop critical thinking skills and enhances their overall mathematical proficiency.

CONCLUSION: In conclusion, Mrs. Thompson's multifaceted assessment approach, coupled with personalized feedback strategies, allows her to effectively assess her students' understanding of multiplication and division concepts in the third-grade mathematics class at Oakridge Elementary School. By accommodating diverse learners and fostering reflection and growth through constructive feedback, Mrs. Thompson promotes student engagement, learning, and development in mathematics.

Activity 3: Assessment Plan
Columns to complete: SUBJECT | SPECIFIC LESSON | LEARNING OBJECTIVES | LEARNING TARGETS | ASSESSMENT ACTIVITY/TASK | Why use this assessment? | How does this assessment task/activity improve your instruction? | How does this assessment task/activity help your learners achieve the intended learning objectives?
LEARNING OUTCOMES:
In this lesson, you are expected to:
illustrate scenarios in the use of different classifications of assessment,
rationalize the purposes of different forms of assessment, and
decide on the kind of assessment to be used.
What are the different classifications of assessment?

CLASSIFICATION                 TYPES
1. Purpose                     Educational / Psychological
2. Form                        Paper-and-Pencil / Performance-Based
3. Function                    Teacher-Made / Standardized
4. Kind of Learning            Achievement / Aptitude
5. Ability                     Speed / Power
6. Interpretation of Learning  Norm-Referenced / Criterion-Referenced
1. PURPOSE
a. Educational
 Used in a school setting for the purpose of tracking the growth of the learners and grading their performance.
 Comes from formative and summative assessments to provide information about student learning.
 Parallelism between the tasks provided must be observed.
b. Psychological
 Tests and scales that measure and determine the learner's cognitive and non-cognitive characteristics.
 Examples: ability, aptitude, intelligence, and critical thinking.
 Affective measures cover personality, motivation, attitude, interest, and disposition.
 Used by the GC for learners' academic, career, and social and emotional development.

2. FORM
a. Paper-and-Pencil
 Cognitive tasks that require a single correct answer.
 Usually come in test types (True or False, Identification, Matching Type, Multiple Choice).
 Pertain to a specific cognitive skill (Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating).
b. Performance-Based
 Requires students to perform tasks (demonstrations, arriving at a product, showing strategies, presenting information).
 Focuses on the development of complex skills.
 Usually open-ended, and each learner can arrive at various possible responses.
Check your understanding!
1. Color the shapes according to the instructions.
2. Connect the dots to reveal the hidden picture.
3. Circle the word that rhymes with 'cat'.
4. Draw a line to match each number to its corresponding quantity.
5. Write the missing letter to complete the word.
6. Recite a short poem about friendship in front of the class.
7. Build a model of a simple machine using classroom materials.
8. Demonstrate how to properly wash your hands to keep germs away.
9. Role-play a scenario where you're showing kindness to a friend.
10. Present a short story you wrote about your favorite animal to the class.
3. FUNCTION
a. Teacher-Made
 Also called non-standardized; usually intended for classroom assessment.
 Used for classroom purposes, such as determining whether learners have reached the learning target.
 Examples: formative and summative assessments.
b. Standardized
 Have fixed directions for administering, scoring, and interpreting results.
 Can be purchased with test manuals, booklets, and answer sheets.
 Sampled on a large number of target groups called the norm.

4. KIND OF LEARNING
a. Achievement
 Measures what learners have learned after instruction or after going through a specific curricular program.
 A measure of what a person has learned within or up to a given time.
 Measures accomplished skills and indicates what a person can do at present.
b. Aptitude
 Aptitudes are characteristics that influence a person's behavior and aid goal attainment in a particular situation.
 Refers to the degree of readiness to learn and perform well in a particular situation or domain.
 Examples: the ability to comprehend instructions, manage one's time, use acquired knowledge, manage emotions, make good inferences/generalizations, etc.

5. ABILITY
a. Speed Tests
 A speed test measures how quickly an individual can complete a given task or set of tasks within a specific time frame.
 The emphasis is on completing tasks with accuracy and efficiency, focusing on the speed of performance.
 Examples of speed tests include timed math drills, reading comprehension tests with time limits, and timed typing exercises.
 Speed tests are often used to assess processing speed, reaction time, and the ability to work efficiently under time pressure.
b. Power Tests
 A power test assesses an individual's ability to solve complex problems or perform tasks that may not necessarily be constrained by time limits.
 The focus of a power test is on the depth of understanding, problem-solving ability, and cognitive skills required to tackle challenging tasks.
 Examples of power tests include standardized tests like the SAT or GRE, which include questions of varying difficulty levels and allow test-takers to spend more time on each question.
 Power tests aim to measure intellectual capacity, reasoning ability, and the application of knowledge in more open-ended contexts.

6. INTERPRETATION OF LEARNING
a. Norm-Referenced
 Purpose: Norm-referenced tests are designed to compare an individual's performance against a group of peers or a "norming group."
 Interpretation: Scores are reported in percentiles, which indicate where an individual's performance ranks relative to others in the norming group.
 Focus: The focus is on relative standing and ranking among test takers rather than mastery of specific content.
 Examples: standardized achievement tests like the Scholastic Aptitude Test (SAT), American College Testing (ACT), and intelligence quotient (IQ) tests.
b. Criterion-Referenced
 Purpose: Criterion-referenced tests aim to evaluate an individual's level of mastery or proficiency in specific content areas or skills.
 Interpretation: Scores indicate the degree to which the individual has met predetermined criteria or standards.
 Focus: The focus is on whether the individual has achieved a certain level of proficiency or mastery in the content area being assessed.
 Examples: state standardized tests, licensure exams, proficiency exams, mastery tests.
ACTIVITY 4: (TRUE OR FALSE)
1. True or False: The SAT is an example of a psychological assessment.
2. True or False: Performance-based assessments are always paper-and-pencil tests.
3. True or False: Teacher-made assessments are standardized tests.
4. True or False: An achievement test measures inherent abilities.
5. True or False: Aptitude tests assess specific skills or knowledge.
6. True or False: Speed is a measure of how quickly tasks can be completed.
7. True or False: Power tests focus on the depth or complexity of problem-solving abilities.
8. True or False: Norm-referenced tests compare individuals' performance against predetermined standards.
9. True or False: Criterion-referenced tests determine if individuals have met specific criteria or standards.
10. True or False: IQ tests are examples of educational assessments.
11. True or False: A spelling test is a performance-based assessment.
12. True or False: Standardized tests are always teacher-made.
13. True or False: Aptitude tests measure achievement in a specific subject area.
14. True or False: Criterion-referenced tests focus on ranking test takers relative to each other.
15. True or False: The GRE (Graduate Record Examination) is an example of a speed test.
16. True or False: Power tests assess the rate or efficiency of completing tasks.
17. True or False: Psychological assessments measure cognitive abilities and emotional intelligence.
18. True or False: Teacher-made assessments are designed to compare individuals' performance against each other.
19. True or False: Criterion-referenced tests are used to determine if individuals have achieved specific standards.
20. True or False: The ACT (American College Testing) is a norm-referenced test.
  • 79.
    LET’S CHECK! 1. Trueor False: The SAT is an example of a psychological assessment. 2. True or False: Performance-based assessments are always paper-and-pencil tests. 3. True or False: Teacher-made assessments are standardized tests. 4. True or False: An achievement test measures inherent abilities. 5. True or False: Aptitude tests assess specific skills or knowledge. F F F F F
  • 80.
    LET’S CHECK!
    6. True or False: Speed is a measure of how quickly tasks can be completed. (Answer: T)
    7. True or False: Power tests focus on the depth or complexity of problem-solving abilities. (Answer: T)
    8. True or False: Norm-referenced tests compare individuals' performance against predetermined standards. (Answer: T)
    9. True or False: Criterion-referenced tests determine if individuals have met specific criteria or standards. (Answer: T)
    10. True or False: IQ tests are examples of educational assessments. (Answer: F)
    LET’S CHECK!
    11. True or False: A spelling test is a performance-based assessment. (Answer: T)
    12. True or False: Standardized tests are always teacher-made. (Answer: F)
    13. True or False: Aptitude tests measure achievement in a specific subject area. (Answer: F)
    14. True or False: Criterion-referenced tests focus on ranking test takers relative to each other. (Answer: F)
    15. True or False: The GRE (Graduate Record Examination) is an example of a speed test. (Answer: F)
    LET’S CHECK!
    16. True or False: Power tests assess the rate or efficiency of completing tasks. (Answer: F)
    17. True or False: Psychological assessments measure cognitive abilities and emotional intelligence. (Answer: T)
    18. True or False: Teacher-made assessments are designed to compare individuals' performance against each other. (Answer: F)
    19. True or False: Criterion-referenced tests are used to determine if individuals have achieved specific standards. (Answer: T)
    20. True or False: The ACT (American College Testing) is a norm-referenced test. (Answer: T)
    LESSON 4: Planning a Written Test (2 Weeks)
    LEARNING OUTCOMES: In this lesson, you are expected to: set appropriate instructional objectives for a written test, and prepare a table of specifications for a written test.
    Why do you need to define the test objectives or learning outcomes targeted for assessment?  The clear articulation of learning outcomes is a primary consideration in lesson planning because it serves as the basis for evaluating the effectiveness of the teaching and learning process, as determined through testing or assessment.  Learning objectives/outcomes are measurable statements that articulate, at the beginning of a course, what students should know, be able to do, or value as a result of taking the course.
    Why do you need to define the test objectives or learning outcomes targeted for assessment?  These learning goals provide the rationale for the curriculum and instruction. They give teachers focus and direction on how the course is to be handled, particularly in terms of course content, instruction, and assessment, and give students the reasons and motivation to study and persevere.  Setting objectives for assessment is the process of establishing direction to guide both the teacher in teaching and the student in learning.
    What are the objectives for testing?  In developing a written test, the cognitive behaviors of learning outcomes are usually targeted.  For the cognitive domain, it is important to identify the levels of behavior expected from students.
    What is a table of specifications?  Sometimes called a test blueprint.  A tool used by teachers to design a test.  It is a table that maps out the test objectives; the contents or topics covered by the test; the levels of cognitive behavior to be measured; the distribution of items; the number, placement, and weights of test items; and the test format.  Generally, a TOS is prepared before a test is created.
    Why is the TOS important?  Ensures that the instructional objectives and what the test captures match.  Ensures that the test developer will not overlook details that are considered essential to a good test.  Makes developing a test easier and more efficient.  Ensures that the test will sample all important content areas and processes.  Is useful in planning and organizing.  Offers an opportunity for teachers and students to clarify achievement expectations.
    What are the general steps in developing a TOS? 1. Determine the objectives of the test. For a written test, you can consider cognitive objectives, ranging from remembering to creating ideas, that could be measured using common formats for testing. 2. Determine the coverage of the test. Only topics or contents that have been discussed in class and are relevant should be included in the test.
    What are the general steps in developing a TOS? 3. Calculate the weight for each topic.
    Topics/Learning Content | Time Spent | Percentage of Time
    Theories and Concepts | 30 minutes | 10%
    Psychoanalytic Theories | 90 minutes | 30%
    Trait Theories | 60 minutes | 20%
    Humanistic Theories | 30 minutes | 10%
    Cognitive Theories | 30 minutes | 10%
    Behavioral Theories | 30 minutes | 10%
    Social Learning Theories | 30 minutes | 10%
    TOTAL | 300 minutes (5 hrs) | 100%
    What are the general steps in developing a TOS? 4. Determine the number of items for the whole test.  To determine the number of items to be included in the test, consider the amount of time needed to answer them.  General rule: students are given 30-60 seconds for each item in test formats with choices; hence, for a 1-hour class, the test should not exceed 60 items.  However, because you also need to allow time for distributing papers, giving directions, and so on, the number of items should be fewer, say 50 items.
    What are the general steps in developing a TOS? 5. Determine the number of items per topic.
    Topics/Learning Content | Percentage of Time | No. of Items
    Theories and Concepts | 10% | 5
    Psychoanalytic Theories | 30% | 15
    Trait Theories | 20% | 10
    Humanistic Theories | 10% | 5
    Cognitive Theories | 10% | 5
    Behavioral Theories | 10% | 5
    Social Learning Theories | 10% | 5
    TOTAL | 100% | 50 items
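Steps 3 to 5 above are simple arithmetic and can be sketched as a short script. This is only an illustration using the sample tables' topics, minutes, and 50-item total; nothing here is prescribed by the source beyond the time-to-weight-to-items logic.

```python
# Sketch of TOS steps 3-5: time spent per topic -> percentage weight -> item count.
# Topic names and minutes follow the sample table; 50 items reflects the
# guideline of staying under the 60-item cap for a 1-hour test.
time_spent = {
    "Theories and Concepts": 30,
    "Psychoanalytic Theories": 90,
    "Trait Theories": 60,
    "Humanistic Theories": 30,
    "Cognitive Theories": 30,
    "Behavioral Theories": 30,
    "Social Learning Theories": 30,
}
total_minutes = sum(time_spent.values())   # 300 minutes (5 hrs)
total_items = 50                           # fewer than the 60-item maximum

weights = {t: m / total_minutes for t, m in time_spent.items()}
items = {t: round(w * total_items) for t, w in weights.items()}

for topic in time_spent:
    print(f"{topic}: {weights[topic]:.0%} -> {items[topic]} items")
```

Because `round()` can over- or under-allocate when percentages do not divide evenly, it is worth checking that the item counts still sum to the intended total before finalizing the TOS.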
    What are the different formats of a TOS? 1. One-Way TOS  Maps out the content or topic, test objectives, number of hours spent, and the format, number, and placement of items.  This type of TOS is easy to develop and use because it just works around the objectives without considering the different levels of cognitive behavior.
    What are the different formats of a TOS? Sample of One-Way TOS
    What are the different formats of a TOS? 2. Two-Way TOS  Reflects not only the content, time spent, test content, and number of items but also the levels of cognitive behavior targeted per test content, based on the theory behind cognitive testing.  One advantage is that it allows one to see the levels of cognitive skills and dimensions of knowledge that are emphasized by the test.  It also shows the framework of assessment used in the development of the test.
    What are the different formats of a TOS? Sample of Two-Way TOS
    What are the different formats of a TOS? 3. Three-Way TOS  Reflects the features of the one-way and two-way TOS.  One advantage is that it challenges the test writer to classify objectives based on the theory behind the assessment.  It also shows the variability of thinking skills targeted by the test.  However, it takes much longer to develop this type of TOS.
    What are the different formats of a TOS? Sample of Three-Way TOS
    ACTIVITY 5: Experiential Learning (TOS Making) 1. Identify a subject in Elementary from Grades 1-6. 2. Ask the subject teacher for the syllabus of the chosen subject. 3. Using the TOS format of DMMMSU, create a TOS for the third or fourth quarter, depending on the subject teacher’s choice. 4. Submit the created TOS to the subject teacher for corrections and scoring. 5. Submit the created TOS and the result of the evaluation to me.
    Rubric for the TOS!
    Criteria | 1 Point | 2 Points | 3 Points | 4 Points | 5 Points
    1. Clarity of Objectives | Objectives are unclear and misaligned. | Some objectives are unclear or misaligned. | Most objectives are clear and aligned. | Objectives are clearly defined and well-aligned. | Objectives are exceptionally clear and perfectly aligned.
    2. Coverage of Content | Major content areas are missing or incomplete. | Some major content areas are missing. | Most major content areas are covered. | All major content areas are covered. | All major content areas are comprehensively covered.
    3. Depth of Content | Content depth is lacking and superficial. | Content depth is inconsistent. | Content depth is adequate. | Content depth is substantial. | Content depth is profound and comprehensive.
    4. Balance of Cognitive Levels | Imbalance in cognitive levels. | Some imbalance in cognitive levels. | Adequate balance of cognitive levels. | Good balance of cognitive levels. | Excellent balance of cognitive levels.
    5. Weightage of Objectives | Objectives are poorly weighted. | Some objectives are inaccurately weighted. | Objectives are adequately weighted. | Objectives are well-weighted. | Objectives are perfectly weighted.
    6. Clarity of Assessment Methods | Assessment methods are unclear. | Some assessment methods are unclear. | Assessment methods are mostly clear. | Assessment methods are clear and concise. | Assessment methods are exceptionally clear.
    7. Format and Organization | Poorly organized and difficult to follow. | Somewhat disorganized and challenging to follow. | Mostly well-organized and easy to follow. | Well-organized and easy to follow. | Exceptionally well-organized and easy to follow.
    8. Adaptability and Flexibility | Table is not adaptable or flexible. | Table has limited adaptability or flexibility. | Table is somewhat adaptable and flexible. | Table is adaptable and flexible. | Table is highly adaptable and flexible.
    LEARNING OUTCOMES: In this lesson, you are expected to: 1. Identify the appropriate test format to measure learning outcomes, and 2. Apply the general guidelines in constructing test items for different test formats.
    What are the general guidelines for choosing the appropriate test format? 1. What are the objectives or desired learning outcomes of the subject/unit/lesson being assessed? 2. What level of thinking is to be assessed (remembering, understanding, applying, analyzing, evaluating, or creating)? Does the cognitive level of the test question match your instructional objectives? Note: for remembering and understanding, use selected-response formats; for higher-order skills, use constructed-response formats. 3. Is the test matched or aligned with the course’s DLOs and the course contents or learning activities? 4. Are the test items realistic to the students?
    What are the major categories and formats of traditional tests?
    1. Selected-Response Tests  Require learners to choose the correct answer or best alternative from several choices.  While they cover a wide range of learning materials very efficiently and measure a variety of learning outcomes, they are limited when assessing learning outcomes that involve more complex and higher thinking skills.
    2. Constructed-Response Tests  Require learners to supply answers to a given question or problem.
    What are examples of selected-response tests?
    1. Multiple Choice Test  The most commonly used format in formal testing; typically consists of a stem (problem), one correct or best alternative (correct answer), and three or more incorrect or inferior alternatives (distractors).
    2. True or False or Alternative Response Test  Generally consists of a statement, with the learner deciding whether the statement is true (accurate/correct) or false (inaccurate/incorrect).
    3. Matching Type  Consists of two sets of items to be matched with each other based on a specified attribute.
    What are examples of constructed-response tests?
    1. Short Answer Test  Consists of open-ended questions or incomplete sentences that require learners to create an answer for each item, typically a single word or short phrase (completion, identification, enumeration).
    2. Essay Test  Consists of problems/questions that require learners to compose or construct written responses, usually long ones with several paragraphs.
    3. Problem-Solving Test  Consists of problems/questions that require learners to solve problems in quantitative or non-quantitative settings using knowledge and skills in mathematical concepts and procedures and/or other higher-order thinking skills (reasoning, analysis, critical thinking).
    General Guidelines for Writing Multiple Choice Test Items CONTENT 1. Write items that reflect only one specific content area and cognitive processing skill. 2. Do not lift and use statements from the textbook or other learning materials as test questions. 3. Keep the vocabulary simple and understandable based on the level of the learners/examinees. 4. Edit and proofread the items for grammatical and spelling errors before administering them to the learners.
    General Guidelines for Writing Multiple Choice Test Items STEM (Problem) 1. Write the directions in the stem clearly and understandably. 2. Write stems that are consistent in form and structure; that is, present all items either in question form or in descriptive or declarative form. Faulty: 1. Who was the Philippine president during Martial Law? 2. The first president of the Commonwealth of the Philippines was?
    General Guidelines for Writing Multiple Choice Test Items STEM (Problem) 3. Word the stem positively and avoid double negatives, such as NOT and EXCEPT, in a stem. If a negative word is necessary, underline or capitalize the word for emphasis. Faulty: Which of the following is not a measure of variability? Good: Which of the following is NOT a measure of variability?
    General Guidelines for Writing Multiple Choice Test Items STEM (Problem) 4. Refrain from making the stem too wordy or containing too much information, unless the problem/question requires the facts presented to solve the problem. FAULTY: What does DNA stand for, and what is the organic chemical of complex molecular structure found in all cells and viruses that codes genetic information for the transmission of inherited traits? GOOD: As a chemical compound, what does DNA stand for?
    General Guidelines for Writing Multiple Choice Test Items OPTIONS 1. Provide three (3) to five (5) options per item, with only one being the correct or best answer/alternative. 2. Write options that are parallel or similar in form and length to avoid giving clues about the correct answer. 3. Place options in a logical order (e.g., alphabetical, from shortest to longest). 4. Place correct responses randomly to avoid a discernible pattern of correct answers. 5. Use None-of-the-above carefully and only when there is one correct answer.
    General Guidelines for Writing Multiple Choice Test Items OPTIONS 6. Avoid All-of-the-above as an option, especially if it is intended to be the correct answer. 7. Make all options realistic and reasonable.
    What are the general guidelines for writing matching-type items? Note: The matching type is most appropriate when you need to measure the learners’ ability to identify the relationship or association between similar items (parallel concepts). 1. Clearly state in the directions the basis for matching the stimuli with the responses. FAULTY: Match the following. GOOD: Column I is a list of countries, while Column II presents the continents where these countries are located. Write the letter of the continent corresponding to the country on the line provided in Column I.
    What are the general guidelines for writing matching-type items? 2. Ensure that the stimuli (Column A) are longer and the responses (Column B) shorter. 3. For each item, include only topics that are related to one another and share the same foundation of information. 4. Make the response options short, homogeneous, and arranged in logical order. 5. Include response options that are reasonable and realistic and similar in length and grammatical form. 6. Provide more response options than the number of stimuli.
    What are the general guidelines for writing True or False items? 1. Include statements that are completely true or completely false. 2. Use simple and easy-to-understand statements. 3. Refrain from using negatives, especially double negatives. FAULTY: There is nothing illegal about buying goods through the internet. GOOD: It is legal to buy goods through the internet. 4. Avoid using absolutes such as “always” and “never”.
    What are the general guidelines for writing True or False items? 5. Express a single idea in each test item. 6. Avoid the use of unfamiliar words or vocabulary. 7. Avoid lifting statements from the textbook and other learning materials.
    What are the different variations of True or False items?
    1. T-F Correction or Modified True-or-False Question  The statement is presented with a keyword or phrase underlined, and the learner has to supply the correct word or phrase. Example: MC test is authentic.
    2. Yes-No Variation  The learner has to choose Yes or No rather than True or False. Example: The following are kinds of tests. Circle YES if it is an authentic test and NO if not.
    3. A-B Variation  The learner has to choose A or B rather than True or False. Example: Indicate which of the following are traditional or authentic tests by circling A if it is a traditional test and B if it is authentic.
    What are the general guidelines for writing short-answer test items? Note: these are fill-in-the-blank or completion test items. 1. Omit only the significant words from the statement. FAULTY: Every atom has a central ___ called a nucleus. GOOD: Every atom has a central core called a _____. 2. Do not omit so many words from the statement that the intended meaning is lost. 3. Avoid obvious clues to the correct response. 4. Be sure that there is only one correct response. FAULTY: The government should start using renewable energy sources for generating electricity, such as ____. GOOD: The government should start using renewable sources of energy by using turbines called _____.
    What are the general guidelines for writing short-answer test items? 5. Avoid grammatical clues to the correct response; for example, use “a(n)” instead of “a” or “an” before a blank. 6. If possible, put the blank at the end of a statement rather than at the beginning. FAULTY: ____ is the basic building block of matter. GOOD: The basic building block of matter is ____.
    What are the general guidelines for writing essay tests? Note:  Essay tests are the preferred form of assessment when teachers want to measure learners’ higher-order thinking skills, particularly their ability to reason, analyze, synthesize, and evaluate.  They are most appropriate for assessing learners’: 1. understanding of subject-matter content, 2. ability to reason with their knowledge of the subject, and 3. problem-solving and decision-making skills, because the items or situations presented in the test are authentic or close to real-life experiences.
    What are the general guidelines for writing essay tests? 1. Clearly define the intended learning outcome to be assessed by the essay test. Use verbs such as compose, analyze, interpret, explain, and justify, among others. 2. Refrain from using essay tests for intended learning outcomes that are better assessed by other kinds of assessment. 3. Clearly define and situate the task within a problem situation, as well as the type of thinking required to answer the test. 4. Present tasks that are fair, reasonable, and realistic to the students. 5. Be specific in the prompts about the time allotment and the criteria for grading the response.
    What are the two types of essay tests?
    1. EXTENDED-RESPONSE  Requires much longer and more complex responses. Example: Imagine you are a superhero helping the environment. Describe three things you would do to save nature in your neighborhood. Explain why each action is important and how it helps plants, animals, and people. Give specific examples.
    2. RESTRICTED-RESPONSE  Much more focused and restrained. Example: Think about a time you helped a friend or a friend helped you. Describe the situation and how you felt. What did you learn about friendship and kindness? Give examples.
    What are the general guidelines for writing problem-solving tests? 1. Identify and explain the problem clearly. 2. Be specific and clear about the type of response required from the students. 3. Specify in the directions the basis for grading students’ answers/procedures.
    What are the different variations of quantitative problem-solving items?
    1. One answer choice  This type of question contains 4-5 options, and students are required to choose the best answer. Example: What is the mean of the following score distribution: 32, 44, 56, 69?
    2. All possible answer choices  This type of question has 4-5 options, and students are required to choose all of the correct options. Example: Which of the following is/are the correct measure/s of central tendency? Indicate all possible answers.
    3. Type-in answer  This type of question does not provide options to choose from. Instead, the learners are asked to supply the correct answer. Example: Compute the mean of the following score distribution: 32, 44, and 56. Indicate your answer in the blank provided.
    ACTIVITY 6: Experiential Learning (Writing a Test) 1. Review the Table of Specifications (TOS):  Carefully review the TOS provided by your instructor for the subject and grading period.  Understand the objectives, content areas, cognitive levels, and assessment methods outlined in the TOS. 2. Identify Key Objectives and Content Areas:  Identify the key objectives and content areas specified in the TOS.  Understand the depth and breadth of knowledge expected for each objective. 3. Determine Assessment Methods:  Pay attention to the assessment methods specified in the TOS for each objective or content area.  Understand the types of questions or tasks that will be used to assess your understanding.
    ACTIVITY 6: Experiential Learning (Writing a Test) 4. Craft Test Questions or Tasks:  Based on the objectives and content areas outlined in the TOS, craft test questions or tasks that align with each objective.  Ensure that the questions or tasks address the cognitive levels specified in the TOS. 5. Distribute Questions or Tasks Evenly:  Distribute the questions or tasks evenly across the content areas and cognitive levels specified in the TOS.  Ensure a balanced representation of different types of questions or tasks (e.g., multiple choice, short answer, essay) if applicable. 6. Consider Time and Resources:  Consider the time allocated for the test and the resources available for assessment.  Ensure that the test can be completed within the allotted time frame and with the available resources.
    ACTIVITY 6: Experiential Learning (Writing a Test) 7. Ensure Clarity and Fairness:  Ensure that test questions or tasks are clear, concise, and free of ambiguity.  Avoid biased language or content that may disadvantage certain groups of students. 8. Review, Seek Feedback, and Revise:  Review the test questions or tasks to ensure alignment with the TOS and clarity of assessment.  Use feedback to refine and improve the test before finalizing it for administration.  Revise as needed to address any gaps or inconsistencies identified during the review process. 9. Finalize the Test:  Make any final adjustments or revisions based on feedback received.  Ensure that the test is formatted and organized in a clear and accessible manner for students.
    RUBRIC IN GRADING THE TEST
    Criteria | 1 Point | 2 Points | 3 Points | 4 Points | 5 Points
    1. Alignment with TOS | Test objectives and content poorly align with TOS. | Some alignment with TOS objectives and content. | Adequate alignment with TOS objectives and content. | Good alignment with TOS objectives and content. | Excellent alignment with TOS objectives and content.
    2. Clarity and Understandability | Test questions/tasks are confusing and unclear. | Some questions/tasks are unclear. | Most questions/tasks are clear and understandable. | Questions/tasks are clear and understandable. | Questions/tasks are exceptionally clear and understandable.
    3. Coverage of Content | Key content areas are poorly covered or omitted. | Some key content areas are inadequately covered. | Most key content areas are adequately covered. | All key content areas are well-covered. | All key content areas are comprehensively covered.
    4. Cognitive Level Representation | Few questions/tasks align with specified cognitive levels. | Some questions/tasks align with specified cognitive levels. | Most questions/tasks align with specified cognitive levels. | Questions/tasks align well with specified cognitive levels. | Questions/tasks align perfectly with specified cognitive levels.
    5. Variety of Question Types | Limited variety of question types. | Some variety of question types. | Adequate variety of question types. | Good variety of question types. | Excellent variety of question types.
    6. Clarity of Instructions | Instructions are unclear and confusing. | Some instructions are unclear. | Most instructions are clear and understandable. | Instructions are clear and understandable. | Instructions are exceptionally clear and understandable.
    7. Organization and Format | Test lacks organization and proper formatting. | Some organization and formatting issues present. | Test is mostly well-organized and formatted. | Test is well-organized and formatted. | Test is exceptionally well-organized and formatted.
    8. Accuracy of Content | Content is inaccurate or misleading. | Some inaccuracies or inconsistencies in content. | Content is mostly accurate and consistent. | Content is accurate and consistent. | Content is exceptionally accurate and consistent.
    9. Revision and Improvement | Test lacks opportunities for revision and improvement. | Limited opportunities for revision and improvement. | Some opportunities for revision and improvement. | Adequate opportunities for revision and improvement. | Comprehensive opportunities for revision and improvement.
    10. Adherence to Assessment Policies | Test does not adhere to assessment policies. | Limited adherence to assessment policies. | Mostly adheres to assessment policies. | Adheres to assessment policies. | Fully adheres to assessment policies.
    What is test validity?  A measure is valid when it measures what it is supposed to measure.  If a quarterly exam is valid, then its contents should directly measure the objectives of the curriculum.
    What are the different ways to establish test validity?
    1. CONTENT VALIDITY  Definition: The items represent the domain being measured.  Procedure: The items are compared with the objectives of the program; the items need to measure the objectives or definition directly. A reviewer conducts the checking.
    2. FACE VALIDITY  Definition: The test is presented well, free of errors, and administered well.  Procedure: The test items and layout are reviewed and tried out on a small group of respondents. A manual for administration can be made as a guide for the test administrator.
    What are the different ways to establish test validity?
    3. PREDICTIVE VALIDITY  Definition: A measure should predict a future criterion. An example is an entrance exam predicting the grades of students after the first semester.  Procedure: A correlation coefficient is obtained, where the X-variable is used as the predictor and the Y-variable as the criterion.
    4. CONSTRUCT VALIDITY  Definition: The components or factors of the test should contain items that are strongly correlated.  Procedure: The Pearson r can be used to correlate the items for each factor. However, there is a technique called factor analysis to determine which items are highly correlated to form a factor.
    What are the different ways to establish test validity?
    5. CONCURRENT VALIDITY  Definition: Two or more measures of the same characteristic are present for each examinee.  Procedure: The scores on the measures should be correlated.
    6. CONVERGENT VALIDITY  Definition: The components or factors of a test are hypothesized to have a positive correlation.  Procedure: Correlation is done for the factors of the test.
    7. DIVERGENT VALIDITY  Definition: The components or factors of a test are hypothesized to have a negative correlation; an example is correlating scores on a test of intrinsic motivation with scores on a test of extrinsic motivation.  Procedure: Correlation is done for the factors of the test.
    What is test reliability?  Reliability is the consistency of responses to a measure under three conditions: (1) when the same person is retested; (2) when the same measure is given again; and (3) when responses are similar across items that measure the same characteristic.
    What are the different factors that affect the reliability of a measure? 1. The number of items in a test. The more items a test has, the higher the likelihood of reliability. The probability of obtaining consistent scores is high because of the large pool of items. 2. Individual differences of participants. Every participant possesses characteristics that affect their performance in a test, such as fatigue, concentration, innate ability, perseverance, and motivation. These individual factors change over time and affect the consistency of the answers in a test. 3. External environment. The external environment may include room temperature, noise level, depth of instruction, exposure to materials, and quality of instruction, which could affect changes in the responses of examinees in a test.
    What are the different ways to establish test reliability?
    1. Test-Retest  Procedure: Administer the same test twice (e.g., as a pre-test and post-test), with a time interval of a minimum of 30 minutes and a maximum of 6 months. Applicable for tests that measure stable variables, such as aptitude and psychomotor measures.  Statistical tool: Correlate the scores using the Pearson product-moment correlation, or Pearson r. A significant, positive correlation indicates that the test has temporal stability over time.
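As an illustration of the statistical tool, Pearson r can be computed directly from paired scores. This is only a sketch; the two score lists are hypothetical, not data from the source.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical test and retest scores for five examinees
test1 = [10, 12, 15, 18, 20]
test2 = [11, 13, 14, 19, 21]
print(round(pearson_r(test1, test2), 2))
```

A coefficient near 1.0, as here, would suggest temporal stability; in practice, a statistical package would also report the significance of the correlation.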
    What are the different ways to establish test reliability?
    2. Parallel Forms  Procedure: There are two versions of the test. Administer one form at one time and the other form at another time to the same group. Done when a test is repeatedly used for different groups, such as entrance and licensure exams.  Statistical tool: Correlate the scores from the two forms (test versions) using Pearson r.
    What are the different ways to establish test reliability?
    3. Split-Half  Procedure: Administer a test to a group of examinees. Split the items into halves using the odd-even technique, then correlate the examinees’ summed scores on the odd and even items. Applicable when the test has a large number of items.  Statistical tool: Correlate the scores using Pearson r; then apply the Spearman-Brown formula. The correlations obtained using Pearson r and Spearman-Brown should be positive, meaning that the test has internal consistency.
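The odd-even split and Spearman-Brown step can be sketched as follows. The item responses are made up for illustration, and the Spearman-Brown prophecy formula assumed here is the usual two-half form, r_full = 2r / (1 + r).

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

responses = [  # rows = examinees, columns = items 1..6 (1 = correct, 0 = wrong)
    [1, 1, 1, 1, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 0],
]
odd = [sum(row[0::2]) for row in responses]    # items 1, 3, 5
even = [sum(row[1::2]) for row in responses]   # items 2, 4, 6

r_half = pearson_r(odd, even)                  # reliability of a half-test
r_full = 2 * r_half / (1 + r_half)             # Spearman-Brown correction
```

The correction matters because the half-test correlation underestimates the reliability of the full-length test.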
    What are the different ways to establish test reliability?
    4. Test of Internal Consistency Using the Kuder-Richardson and Cronbach’s Alpha Methods  Procedure: Used to determine whether the scores for each item are consistently answered by the examinees. After test administration, determine and record the scores. Mostly applicable for scales and inventories.  Statistical tool: Use the statistical analysis called Cronbach’s alpha or Kuder-Richardson. A Cronbach’s alpha value of 0.60 and above indicates that the test has internal consistency.
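A minimal Cronbach's alpha computation can illustrate the method. This sketch assumes the common formula alpha = k/(k-1) × (1 − Σ item variances / total-score variance); the 5-point-scale responses are hypothetical.

```python
def variance(xs):
    """Population variance; the n vs. n-1 choice cancels in the alpha ratio."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

scores = [  # rows = respondents, columns = scale items (hypothetical data)
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]
k = len(scores[0])                                            # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])            # variance of totals
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

With these made-up responses alpha comes out well above the 0.60 threshold mentioned above, which would indicate internal consistency.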
    What are the different ways to establish test reliability?
    5. Inter-rater Reliability  Procedure: Used to determine the consistency of multiple raters when using rating scales and rubrics to judge performance. Reliability here refers to similar or consistent ratings provided by more than one rater.  Statistical tool: Kendall’s coefficient of concordance (Kendall’s W) is used to determine whether the ratings provided by multiple raters agree with each other. A significant value indicates that the raters concur or agree with each other in their ratings.
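Kendall's W can be computed by hand from rank data. This is a sketch assuming the standard formula W = 12S / (m²(n³ − n)), where m is the number of raters, n the number of ranked performances, and S the sum of squared deviations of the rank sums from their mean; the ranks below are hypothetical.

```python
def kendalls_w(ranks):
    """Kendall's coefficient of concordance for m raters each ranking n items.

    ranks: list of m rows, each a permutation of 1..n (no ties assumed).
    Returns W in [0, 1]: 0 = no agreement, 1 = perfect agreement.
    """
    m, n = len(ranks), len(ranks[0])
    rank_sums = [sum(r[j] for r in ranks) for j in range(n)]
    mean_sum = m * (n + 1) / 2
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical raters ranking four performances (1 = best)
w = kendalls_w([[1, 2, 3, 4],
                [1, 3, 2, 4],
                [2, 1, 3, 4]])
```

Identical rankings from every rater would give W = 1; the modest disagreements above pull W below that.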
    How to determine the strength of correlation?  The strength of correlation is determined through the correlation coefficient value.
    0.80-1.00 = Very strong relationship
    0.60-0.79 = Strong relationship
    0.40-0.59 = Moderate/Substantial relationship
    0.20-0.39 = Weak relationship
    0.00-0.19 = Negligible relationship
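The scale above maps directly onto a small helper function. This sketch takes the absolute value of the coefficient so that negative correlations are classified by magnitude; the labels follow the list above.

```python
def correlation_strength(r):
    """Map a correlation coefficient to the strength labels listed above."""
    r = abs(r)  # classify negative correlations by magnitude
    if r >= 0.80:
        return "Very strong relationship"
    if r >= 0.60:
        return "Strong relationship"
    if r >= 0.40:
        return "Moderate/Substantial relationship"
    if r >= 0.20:
        return "Weak relationship"
    return "Negligible relationship"
```

For example, `correlation_strength(0.85)` falls in the 0.80-1.00 band and returns "Very strong relationship".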
    How to determineif an item is easy or difficult?  An item is difficult if majority of students are unable to provide the correct answer.  An item is easy if majority of the students are able to answer correctly.  An item can discriminate if the examinees who score high in the test can answer more the items correctly that examinees who got low scores.
How to determine if an item is easy or difficult?
1. Get the total score of each student and arrange the scores from highest to lowest. (C = correct, X = wrong)

      I1  I2  I3  I4  I5
S1    X   X   C   C   C
S2    C   C   C   X   C
S3    X   X   X   C   C
S4    X   X   X   X   C
S5    X   C   C   C   C
S6    C   X   C   C   X
S7    X   X   C   C   X
S8    X   C   C   X   X
S9    C   X   C   C   C
S10   C   X   C   C   X
How to determine if an item is easy or difficult?
1. Get the total score of each student and arrange the scores from highest to lowest.

      I1  I2  I3  I4  I5  TOTAL SCORE
S2    C   C   C   X   C   4
S5    X   C   C   C   C   4
S9    C   X   C   C   C   4
S1    X   X   C   C   C   3
S6    C   X   C   C   X   3
S10   C   X   C   C   X   3
S3    X   X   X   C   C   2
S7    X   X   C   C   X   2
S8    X   C   C   X   X   2
S4    X   X   X   X   C   1
How to determine if an item is easy or difficult?
2. Obtain the upper and lower 27% of the group.
 Multiply 0.27 by the total number of students: 0.27 × 10 = 2.7
 Then round the value: 3
 Get the top 3 and bottom 3 students based on the total scores.
TOP 3: S2, S5, S9
BOTTOM 3: S7, S8, S4
How to determine if an item is easy or difficult?
3. Obtain the proportion correct for each item.
 This is computed separately for the upper 27% group and the lower 27% group.
 Sum the correct answers per item and divide by the number of students in the group.
How to determine if an item is easy or difficult?

         I1    I2    I3    I4    I5
S2       C     C     C     X     C
S5       X     C     C     C     C
S9       C     X     C     C     C
TOTAL    2     2     3     2     3
P OF HG  0.67  0.67  1     0.67  1     (each total divided by 3)

S7       X     X     C     C     X
S8       X     C     C     X     X
S4       X     X     X     X     C
TOTAL    0     1     2     1     1
P OF LG  0     0.33  0.67  0.33  0.33  (each total divided by 3)
How to determine if an item is easy or difficult?
4. The item difficulty is obtained using the following formula:

Item difficulty = (pH + pL) / 2

Difficulty Index   Remark
0.76 or higher     Easy Item
0.25 to 0.75       Average Item
0.24 or lower      Difficult Item
How to determine if an item is easy or difficult?
4. The item difficulty is obtained using the formula above:

                     I1       I2       I3    I4       I5
Index of difficulty  0.33     0.50     0.83  0.50     0.67
Remark               Average  Average  Easy  Average  Average
How to determine if an item is easy or difficult?
5. The index of discrimination is obtained using the formula:

Item discrimination = pH - pL

Discrimination Index   Remark
0.40 and above         Very Good Item
0.30-0.39              Good Item
0.20-0.29              Reasonably Good Item
0.10-0.19              Marginal Item
Below 0.10             Poor Item
How to determine if an item is easy or difficult?
5. The index of discrimination is obtained using the formula: Item discrimination = pH - pL

                         I1         I2         I3         I4         I5
pH - pL                  0.67-0     0.67-0.33  1.00-0.67  0.67-0.33  1.00-0.33
Index of discrimination  0.67       0.34       0.33       0.34       0.67
Remark                   Very Good  Good Item  Good Item  Good Item  Very Good
                         Item                                        Item
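Steps 1-5 above can be combined into one sketch. The function name `item_analysis` is illustrative; with the slides' data (listed S1 to S10, ties broken by that order) it selects the same upper group S2, S5, S9 and lower group S7, S8, S4:

```python
def item_analysis(responses):
    """responses: dict of student -> string of 'C'/'X' per item.
    Returns (difficulty, discrimination) lists, one value per item."""
    n_items = len(next(iter(responses.values())))
    # Step 1: rank students by total score, highest first (stable sort keeps ties in listed order).
    ranked = sorted(responses, key=lambda s: responses[s].count('C'), reverse=True)
    # Step 2: upper and lower 27% groups (0.27 * N, rounded).
    k = round(0.27 * len(ranked))
    upper, lower = ranked[:k], ranked[-k:]

    def p(group, i):
        # Step 3: proportion answering item i correctly within a group.
        return sum(responses[s][i] == 'C' for s in group) / len(group)

    difficulty = [(p(upper, i) + p(lower, i)) / 2 for i in range(n_items)]  # Step 4
    discrimination = [p(upper, i) - p(lower, i) for i in range(n_items)]    # Step 5
    return difficulty, discrimination
```

Difficulty and discrimination values can then be read against the two remark tables above.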
ACTIVITY 1: Determine the difficulty and discrimination index of the following items:

      I1  I2  I3  I4  I5
S1    C   C   C   C   C
S2    C   C   C   X   C
S3    X   X   X   X   X
S4    X   X   X   X   C
S5    X   C   C   C   C
S6    C   X   C   C   X
S7    X   X   C   C   X
S8    X   C   C   X   X
S9    C   X   X   X   X
S10   C   X   C   C   X
ACTIVITY 1: ANSWER
Total scores, highest to lowest: S1 = 5; S2, S5 = 4; S6, S10 = 3; S7, S8 = 2; S4, S9 = 1; S3 = 0
Upper 27% (top 3): S1, S2, S5   Lower 27% (bottom 3): S4, S9, S3

                I1         I2            I3            I4            I5
S1              C          C             C             C             C
S2              C          C             C             X             C
S5              X          C             C             C             C
pH              .67        1             1             .67           1
S4              X          X             X             X             C
S9              C          X             X             X             X
S3              X          X             X             X             X
pL              .33        0             0             0             .33
Difficulty      .50 (Ave)  .50 (Ave)     .50 (Ave)     .33 (Ave)     .67 (Ave)
Discrimination  .34 (Good) 1 (Very Good) 1 (Very Good) .67 (Very Good) .67 (Very Good)
LESSON 7: Organization of Test Data Using Tables and Graphs
How do we present test data graphically?
1. Histogram
 A histogram is a type of graph appropriate for quantitative data such as test scores.
 It consists of columns: each column's base represents one class interval, and its height represents the frequency of scores in that interval.
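Before a histogram (or frequency polygon) can be drawn, scores are tallied into class intervals. A minimal sketch, assuming equal-width intervals; the data, parameters, and function name are illustrative:

```python
def frequency_table(scores, low, width, n_classes):
    """Count how many scores fall in each class interval.
    Interval i covers [low + i*width, low + (i+1)*width)."""
    freq = [0] * n_classes
    for s in scores:
        i = (s - low) // width  # which interval this score lands in
        if 0 <= i < n_classes:
            freq[i] += 1
    return freq
```

For example, `frequency_table([12, 15, 17, 21, 22, 28], 10, 5, 4)` tallies the scores into the intervals 10-14, 15-19, 20-24, and 25-29; each resulting count becomes the height of one histogram column.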
How do we present test data graphically?
2. Frequency Polygon
 Also used for quantitative data, and one of the most commonly used methods of presenting test scores.
 It is very similar to a histogram, but instead of bars, it uses lines to compare sets of test data on the same axes.
How do we present test data graphically?
3. Cumulative Frequency Polygon
 Also called an ogive, it plots the cumulative frequency (the running total of frequencies up through each class interval) against the upper boundary of each interval.