ANALYSIS & VALIDATION
OBJECTIVES:
At the end of the lesson the learner will be able to:
1. Explain the meaning of item analysis, item validity, reliability, item difficulty, and discrimination index
2. Determine the validity and reliability of given test items
3. Determine the quality of a test item by its difficulty index, discrimination index, and plausibility of options (for a selected-response test)
ITEM ANALYSIS
•Item analysis is the process of examining test
items (questions) after they have been
administered to determine their quality,
effectiveness, and fairness. It helps identify which
items are good, which are too easy or too hard,
and which may be misleading or unclear.
Purpose of Item Analysis:
•Improve the quality of test questions.
•Ensure that the test measures the intended learning
outcomes.
•Identify and remove faulty or ambiguous items.
•Enhance the reliability and validity of the
assessment.
TYPES OF QUANTITATIVE ITEM ANALYSIS
There are three common types of quantitative item analysis, each giving teachers a different kind of information about individual test items: the difficulty index, the discrimination index, and distractor analysis.
Difficulty Index
It is a measure of the proportion of examinees who
answered the item correctly.
The difficulty of an item or item difficulty is defined as
the number of students who are able to answer the item
correctly divided by the total number of students.
Thus:
Item difficulty = (number of students with correct answer) / (total number of students)
Example: What is the item difficulty index of an item if 14 students are unable to answer it correctly while 26 answered it correctly?
Item difficulty = 26 / 40
= 0.65 or 65%
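As a quick check, here is a minimal Python sketch of this computation, reproducing the example above (the function name difficulty_index is ours, not from the source):

```python
def difficulty_index(num_correct: int, num_students: int) -> float:
    """Proportion of examinees who answered the item correctly."""
    if num_students <= 0:
        raise ValueError("total number of students must be positive")
    return num_correct / num_students

# Example from the text: 26 of 40 students answered correctly.
p = difficulty_index(26, 40)
print(f"Item difficulty = {p:.2f} or {p:.0%}")  # 0.65 or 65%
```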
To determine the level of difficulty of an item, first find the difficulty index using the formula, then identify the level of difficulty against a range scale such as the one sketched below.
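The range table from the original slide was not preserved here, so the sketch below uses the cut-offs most often paired with this formula in assessment textbooks (0.00–0.25 difficult, 0.26–0.75 right difficulty, 0.76 and above easy); treat these boundaries as an assumption rather than the slide's own values:

```python
def difficulty_level(p: float) -> str:
    """Classify a difficulty index on a commonly used textbook scale.
    NOTE: cut-offs are assumed; the slide's own range table was lost."""
    if p <= 0.25:
        return "Difficult"
    elif p <= 0.75:
        return "Right difficulty"
    return "Easy"

print(difficulty_level(0.65))  # Right difficulty
```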
One problem with this type of difficulty index is that it may not actually indicate that the item is difficult (or easy). A student who does not know the subject matter will naturally be unable to answer the item correctly even if the question is easy. We are therefore interested in deriving a measure that tells us whether an item can discriminate between students who know the material and students who do not.
Discrimination Index
It is the difference between the proportion of the top
scorers who got an item correct and the proportion of
the lowest scorers who got the item right.
An easy way to derive such a measure is to
measure how difficult an item is with respect to those in
the upper 25% of the class and how difficult it is with
respect to those in the lower 25% of the class.
Index of discrimination = DU - DL
(DU = difficulty index of the upper group; DL = difficulty index of the lower group)
•Example: Obtain the index of discrimination of an item if the
upper 25% of the class had a difficulty index of 0.60 (i.e.
60% of the upper 25% got the correct answer) while the
lower 25% of the class had a difficulty index of 0.20.
•Here, DU = 0.60 while DL = 0.20,
•thus index of discrimination = 0.60 - 0.20 = 0.40
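The same arithmetic in a minimal Python sketch (the function name is ours):

```python
def discrimination_index(du: float, dl: float) -> float:
    """Difference between upper-group and lower-group difficulty indices."""
    return du - dl

# Example from the text: DU = 0.60, DL = 0.20.
d = discrimination_index(0.60, 0.20)
print(f"Index of discrimination = {d:.2f}")  # 0.40
```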
To determine the level of discrimination of an item, first find the discrimination index using the formula, then identify its level against a range scale such as the one sketched below.
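As with the difficulty levels, the slide's discrimination range table was not preserved, so the sketch below uses Ebel's widely cited scale (0.40 and above very good; 0.30–0.39 reasonably good; 0.20–0.29 marginal; below 0.20 poor); the exact cut-offs are an assumption here:

```python
def discrimination_level(d: float) -> str:
    """Classify a discrimination index on Ebel's widely cited scale.
    NOTE: cut-offs are assumed; the slide's own range table was lost."""
    if d >= 0.40:
        return "Very good item"
    elif d >= 0.30:
        return "Reasonably good item"
    elif d >= 0.20:
        return "Marginal item (usually needs improvement)"
    return "Poor item (revise or discard)"

print(discrimination_level(0.40))  # Very good item
```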
• Consider a multiple-choice test for which the following data were obtained. A class is composed of 50 students. Use 25% to form the upper and lower groups (25% of 50 rounds up to 13 students per group). Analyze the item given the following results, where option B is the correct answer. What will you do with the test item?
Let us compute the Difficulty Index and the Discrimination Index.
Difficulty Index = (number of students with correct answer) / (total number of students)
= 30 / 50
= 0.60 or 60%
Interpretation: Right difficulty
Action to be taken: Retain
DU = (no. of students in upper 25% with correct response) / (no. of students in the upper 25%)
= 12 / 13
= 0.92 or 92%
DL = (no. of students in lower 25% with correct response) / (no. of students in the lower 25%)
= 3 / 13
= 0.23 or 23%
Discrimination Index = DU - DL
= 0.92 - 0.23
= 0.69 or 69%
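Putting the two formulas together, a short script reproduces every figure in this worked example. (On Ebel's scale, an assumption since the slide's interpretation table was lost, a discrimination index of 0.69 marks a very good item, which supports retaining it.)

```python
# Worked example: 50 students, 30 correct overall; upper and lower groups
# of 13 students each (25% of 50, rounded up); 12 correct in the upper
# group and 3 correct in the lower group.
total_students, total_correct = 50, 30
group_size = 13
upper_correct, lower_correct = 12, 3

p = total_correct / total_students   # difficulty index
du = upper_correct / group_size      # upper-group difficulty
dl = lower_correct / group_size      # lower-group difficulty
d = du - dl                          # discrimination index

print(f"Difficulty index     = {p:.2f}")  # 0.60
print(f"DU = {du:.2f}, DL = {dl:.2f}")    # 0.92, 0.23
print(f"Discrimination index = {d:.2f}")  # 0.69
```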
•Distractor Analysis – It checks if the wrong
answers (distractors) are working well.
•A good distractor: Attracts some low-scoring students
(those who didn’t master the lesson).
•A bad distractor: Almost nobody chooses it —
meaning it’s too obviously wrong and not doing its
job.
•Good distractor – Low scorers pick it sometimes; high scorers rarely pick it. Action: keep it.
•Bad distractor – Nobody (or very few) chooses it. Action: revise or replace.
•Misleading distractor – High scorers choose it often. Action: fix it; it may be confusing.
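Distractor analysis can be scripted the same way. The sketch below applies the rules from the list above to per-option pick counts; the "almost nobody" threshold and the sample counts are illustrative assumptions, not data from the source:

```python
def classify_distractor(upper_picks: int, lower_picks: int) -> str:
    """Classify a wrong option from its upper- and lower-group pick counts.
    The 'almost nobody' threshold (one pick or fewer) is an assumption."""
    if upper_picks + lower_picks <= 1:
        return "Bad distractor: revise or replace"
    if upper_picks > lower_picks:
        return "Misleading distractor: fix it, it may be confusing"
    return "Good distractor: keep it"

# Hypothetical counts for the wrong options of an item whose key is B.
for option, (upper, lower) in {"A": (1, 5), "C": (0, 1), "D": (4, 1)}.items():
    print(option, "->", classify_distractor(upper, lower))
```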
VALIDATION IN ASSESSMENT
•Validation is the process of ensuring that the assessment truly
measures what it is intended to measure. It focuses on the
accuracy, fairness, and relevance of the test.
Types of Validity:
Content Validity – The test covers all intended topics and objectives.
Example: A math exam for Grade 6 includes questions on all topics
taught during the school year (fractions, decimals, geometry), not just on
fractions.
Construct Validity – The test truly measures the intended concept or
skill.
Example: A “self-confidence questionnaire” actually measures
confidence and not unrelated traits like intelligence or popularity.
Criterion-related Validity – The test correlates well with other
measures of the same ability (predictive or concurrent).
Example: A college entrance exam score predicts how well students will
perform in their first year of college.
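Criterion-related validity is typically reported as a correlation (validity) coefficient between test scores and the criterion measure. A minimal sketch, using entirely hypothetical score pairs:

```python
from statistics import correlation  # Pearson's r, Python 3.10+

# Hypothetical data: entrance-exam scores vs. first-year college GPA.
exam_scores = [72, 85, 90, 65, 78, 88, 70, 95]
first_year_gpa = [2.4, 3.1, 3.5, 2.0, 2.7, 3.3, 2.3, 3.8]

r = correlation(exam_scores, first_year_gpa)
print(f"Validity coefficient r = {r:.2f}")  # closer to 1.0 = stronger evidence
```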
•Steps in Validation:
1. Planning Stage – Align test items with learning objectives
and the Table of Specifications (TOS).
2. Expert Review – Ask content specialists to review the
items for accuracy, clarity, and appropriateness.
3. Pilot Testing – Administer the test to a small group before
the actual administration.
4. Statistical Analysis – Use item analysis results to revise
items.
5. Revision & Finalization – Edit or replace problematic
items before the final administration.
A broken clock that is always 5 minutes late is consistent (reliable) but wrong (not valid).
Thank you for listening…