Test Construction
Presented By:
Dr. Ijaz Hussain
STAGES OF TEST CONSTRUCTION
1. Determining the purpose of a test
2. Designing clear, unambiguous objectives
3. Drawing up test specification
4. Test construction / Item writing
5. Pre-testing
6. Test administration
7. Scoring and reporting
8. Test interpretation
9. Item analysis
1. Determining the purpose of a test
 What kind of test is it going to be? Achievement, general proficiency,
diagnostic, etc.
 What skill do I want to test?
 What kind of backwash effect do I want?
 What do I want to be able to say or do with the results I obtain?
 What are the practical constraints I have to work within?
Samples of tests with their purposes

Phase when test is administered | Type of test | Purpose of test
During (Formative) | Topic/progress test, which tests how well pupils have learnt | To obtain information about individual pupils' level of mastery of the skills taught
End (Summative) | Achievement test | To evaluate the effectiveness of teaching/materials/methods
2. DESIGNING CLEAR,
UNAMBIGUOUS OBJECTIVES
 Every curriculum should have appropriately
framed, assessable objectives, stated in terms
of overt performance by students.
 In designing a test, determine appropriate
objectives, stated as explicitly as possible.
 State the possible elements of both
comprehension and production
(read Brown page 57 – 58)
3. DRAWING UP TEST SPECIFICATIONS
 A test should have a structure that follows
logically from the lesson or unit you are testing.
 Think of your test specifications as a blueprint
of the test that includes the following:
A description of its content
Item types (such as MCQ, cloze)
Tasks (e.g. written essay, short passage)
Skills to be included
How the test will be scored
How it will be reported
3. DRAWING UP TEST SPECIFICATIONS
1. Tasks
 to give a good indication of the skills tested.
2. Types of text
 refer to what the candidate has to process (reading
and listening) and to produce (writing and speaking)
 The number of texts needs to be specified
3. Topics
 Select topics from syllabus specifications, or
 Topics within the maturational level of the students
 How long the task would take.
3. DRAWING UP TEST SPECIFICATIONS
4. Format
 The number of items in each sub-test should be
specified.
 Use format familiar to students.
5. Weightage
 Not all skills are equally important
 Allocate different marks to different sections of
the test
6. Time allocation
 How important the skill is
 How long the task would take
3. DRAWING UP TEST SPECIFICATIONS
 Many tests have a design that:
◦ Divides them into a number of sections
◦ Offers students a variety of test types
◦ Gives an appropriate relative weight to each section
Section | Skills | Format | Number of items | Marks
A | Listening: main idea, inference | MCQ | 10 | 20
B | Speaking: describing people | Interview: using picture stimuli to describe people | - | 15
C | Reading for meaning (150-word text) | Rational cloze; open-ended questions | 20; 5 | 10; 10
D | Writing: description of places | a. 150-word guided composition; b. 20-blank cloze | 20 | 10
4. TEST CONSTRUCTION / ITEM WRITING
Guidelines for test construction:
1. Work as a team
2. Vet each other’s work at every juncture
3. Vet the stimulus/input material
◦ Appropriateness
◦ Balance and bias
4. After the test items are written, vet each
component.
5. Finally, look at the test as a whole.
◦ Common test format:
MCQ
Cloze test
5. PRE-TESTING
PURPOSE
 Helps to identify poor distractors
 Gives the test writer a chance to improve poor
items
PRINCIPLES FOR PRE-TESTING
 The tester should administer the newly developed
test to a group of examinees similar to the target
group; the purpose is to analyse every individual
item as well as the whole test.
 Numerical data (test results) should be collected to
check the efficiency of the items; this should include
item facility and discrimination.
6. TEST ADMINISTRATION
 Guidelines to consider to ensure that the
actual administration of the test
accomplishes everything you want to:
1. When and where will the test be
administered?
2. How will it be administered?
3. Who will administer the test?
4. What facilities/apparatus will be
necessary for the successful
administration of the test?
7. SCORING AND REPORTING
Scoring
 The scoring plan reflects the relative weight placed
on each section and on the items in each section.
 Objective tests have a pre-determined answer; in
subjective tests, however, many decisions have to
be made.
Reporting
 The most common way is in terms of grades.
 Sometimes pupils are rank-ordered according to
the scores they obtained and a class position is
recorded.
ITEM ANALYSIS
Definition
Item analysis is a statistical technique used for selecting
and rejecting the items of a test on the basis of their
difficulty value and discrimination power. It is a
scientific way of improving the quality of a test.
Purpose
To increase the effectiveness of the test
To obtain information about the difficulty level of all items
To select the appropriate items for the final draft
Types of Item Analysis
 1. Qualitative
 2. Quantitative
Item analysis is done for obtaining
1. Difficulty value
2. Discrimination Power
Item Analysis
The Difficulty (Facility) Index is simply the
percentage of students who answer an item
correctly
Denoted by P
Ranges from 0 to 100
Formula: P = (H + L)/N × 100, where H and L are the
numbers of correct responses in the high- and
low-scoring groups and N is the total number of
students in both groups
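The facility formula can be checked with a short sketch. The function name and the sample numbers below are illustrative only; H, L, and N are used as defined above.

```python
def difficulty_index(H, L, N):
    """Facility index P: percentage of students answering the item correctly.

    H -- correct responses in the high-scoring group
    L -- correct responses in the low-scoring group
    N -- total number of students in both groups
    """
    return (H + L) / N * 100

# Example: 18 pupils in the top group and 10 in the bottom group answered
# the item correctly, out of 40 pupils in total.
print(difficulty_index(18, 10, 40))  # 70.0
```

A P value near 100 marks a very easy item, a value near 0 a very hard one.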
Item Analysis
Item Discrimination refers to the ability of an
item to differentiate between high-achieving
and low-achieving students
Range of item discrimination: −1 to +1
Formula: D = (RH − RL)/NH (or NL), where RH and RL
are the numbers of correct responses in the high
and low groups and NH = NL is the size of each group
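The discrimination formula can be sketched the same way; again the function name and sample figures are illustrative, with RH, RL, and the group size n as defined above.

```python
def discrimination_index(RH, RL, n):
    """Discrimination index D in [-1, 1].

    RH -- correct responses in the high-scoring group
    RL -- correct responses in the low-scoring group
    n  -- size of one group (high and low groups are equal in size)
    """
    return (RH - RL) / n

# Example: 18 of 20 top-group pupils but only 10 of 20 bottom-group
# pupils answered correctly.
print(discrimination_index(18, 10, 20))  # 0.4
```

A positive D means good students outperform weak students on the item; a negative D signals a flawed item.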
ITEM ANALYSIS
The discrimination index shows how well a
test managed to separate the good students from
the poor students.
 Perfect discrimination is a score of 1. This
means all the good candidates got the item
correct and all the poor students got the item
wrong. This score is seldom obtained.
 Generally, an item that has a discrimination index
of less than 0.3 is not considered good and may be
removed from the test.
 A negative discrimination index means that the item
is easier for poor students than for good students.
TUTORIAL
Pairwork:
 Draw up a table of specifications that
reflects both the purpose and the
objectives of the test
 Discuss the importance of test
specifications for the purpose of
assessment
References
 Brown, H. D., & Abeywickrama, P. (2004).
Language assessment: Principles and classroom
practices. White Plains, NY: Pearson Education.
 Chitravelu, N., Sithamparam, S., & Teh, S. C.
(2005). ELT methodology: Principles and
practice. Oxford Fajar.
