Kinds of Tests and Testing
Musfera Nara Vadia
1300925
K4-13
Kinds of Tests and Testing
• Proficiency tests
• Achievement tests
• Diagnostic tests
• Placement tests
• Direct and indirect testing
• Discrete point and integrative testing
• Norm-referenced and criterion-referenced testing
• Objective testing and subjective testing
• Computer adaptive testing
• Communicative language testing
Proficiency Tests
• Proficiency tests are designed to measure people’s ability in a
language, regardless of any training they may have had in that
language.
• This test is based on a specification of what candidates have to be
able to do in the language in order to be considered proficient.
• This test is not based on courses that candidates may previously have
taken.
• For example: the TOEFL test, FCE and CPE.
Achievement Tests
• Achievement tests are directly related to language courses, their
purpose being to establish how successful individual students, groups
of students or the courses themselves have been in achieving
objectives.
• There are two kinds of achievement tests:
• Final Achievement Tests
• Progress Achievement Tests
Final Achievement Tests
• Final Achievement Tests are those administered at the end of a
course of study. They contribute to summative assessment.
• These tests are sometimes described as taking a syllabus-content
approach, in which the tests’ content is based directly on a detailed
course syllabus or on the books and other materials used.
Progress Achievement Tests
• Progress Achievement Tests are intended to measure the progress
that students are making. They contribute to formative assessment.
• One way of measuring progress would be to administer the final
achievement test repeatedly, with the increasing scores indicating
the progress made.
Diagnostic Tests
• Diagnostic tests are used to identify learners’
strengths and weaknesses. They are intended
primarily to ascertain what learning still needs to
take place.
Placement Tests
• Placement tests are intended to provide
information that will help to place students at the
stage (or in the part) of the teaching program most
appropriate to their abilities.
• Typically used to assign students to classes at
different levels.
Direct and Indirect Testing
• A direct test requires the candidate to perform precisely the skill that
we wish to measure.
• An indirect test attempts to measure the abilities that underlie the
skills in which we are interested.
• The main problem with indirect tests is that the relationship between
performance on them and performance of the skills in which we are
usually more interested tends to be rather weak in strength and uncertain
in nature.
Discrete Point and Integrative Testing
• Discrete point testing refers to the testing of one element at a
time, item by item. Such a test might, for example, take the form of
a series of items, each testing a particular grammatical structure.
• Integrative testing requires the candidate to combine many
language elements in the completion of a task, for example writing a
composition or taking notes.
Norm-referenced and Criterion-referenced
Testing
• Norm-referenced testing relates one candidate’s performance to
that of other candidates; we are not told directly what the student
is capable of doing in the language.
• For example: student A obtained a score that places him in the top
10 per cent of candidates who have taken that test.
Criterion-referenced Testing
• A criterion-referenced test classifies people according to whether or
not they are able to perform some task or set of tasks satisfactorily.
• The tasks are set, and those who perform them satisfactorily ‘pass’;
those who do not, ‘fail’ (see the sketch below).
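To make the contrast in score interpretation concrete, here is a minimal
sketch in Python; the cohort scores and the cut score of 75 are invented
for illustration and are not taken from Hughes. A norm-referenced reading
reports where a candidate stands relative to the other candidates, while
a criterion-referenced reading asks only whether the candidate reaches
the criterion.

# Minimal sketch: two readings of the same (hypothetical) score.

def percentile_rank(score, cohort_scores):
    """Percentage of the cohort scoring at or below `score` (norm-referenced view)."""
    at_or_below = sum(1 for s in cohort_scores if s <= score)
    return 100.0 * at_or_below / len(cohort_scores)

def meets_criterion(score, cut_score):
    """Pass/fail against a fixed criterion (criterion-referenced view)."""
    return score >= cut_score

cohort = [42, 55, 61, 63, 70, 74, 78, 81, 88, 93]   # candidates' scores (made up)
student_a = 88

print(f"Norm-referenced: in the top {100 - percentile_rank(student_a, cohort):.0f}% of candidates")
print(f"Criterion-referenced: {'pass' if meets_criterion(student_a, cut_score=75) else 'fail'}")

On these made-up numbers the same score of 88 supports both readings from
the slides: ‘top 10 per cent of candidates’ (norm-referenced) and ‘pass,
since 88 is at or above the cut score of 75’ (criterion-referenced).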
Objective Testing and Subjective Testing
• A test is objective if no judgment is required on the part of the scorer.
A multiple-choice test, with the correct responses unambiguously
identified, would be a case in point.
• A test is subjective if judgment is called for.
The less subjective the scoring, the greater the agreement there will be
between two different scorers (see the sketch below).
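The point about scorer agreement can be illustrated with a small Python
sketch; the two scorers’ marks below are invented, and real studies more
often report a correlation or a kappa coefficient rather than this simple
proportion.

# Minimal sketch: exact agreement between two scorers marking the same answers.

def exact_agreement(scorer_1, scorer_2):
    """Proportion of items on which the two scorers gave the same mark."""
    pairs = list(zip(scorer_1, scorer_2))
    return sum(1 for a, b in pairs if a == b) / len(pairs)

# Objective scoring (e.g. multiple choice marked against a key): marks coincide.
print(exact_agreement([1, 0, 1, 1, 0], [1, 0, 1, 1, 0]))   # 1.0

# Subjective scoring (e.g. compositions rated 0-5): marks diverge more often.
print(exact_agreement([4, 3, 5, 2, 4], [3, 3, 4, 2, 5]))   # 0.4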
Computer Adaptive Testing
• All candidates are presented initially with an item of average
difficulty. Those who respond correctly are presented with a more
difficult item; those who respond incorrectly are presented with an
easier item (see the sketch below).
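The selection rule on this slide can be written as a simple loop. The
Python sketch below is only an illustration of that rule: the difficulty
scale, step size and simulated candidate are assumptions, and operational
computer adaptive tests select items with statistical models (for example
item response theory) rather than a fixed step.

import random

def run_adaptive_test(answers_correctly, n_items=10, easiest=1, hardest=9):
    """Administer n_items, adapting item difficulty to the candidate's responses."""
    difficulty = (easiest + hardest) // 2            # first item: average difficulty
    administered = []
    for _ in range(n_items):
        correct = answers_correctly(difficulty)      # candidate attempts the item
        administered.append((difficulty, correct))
        if correct:
            difficulty = min(hardest, difficulty + 1)    # harder item next
        else:
            difficulty = max(easiest, difficulty - 1)    # easier item next
    return administered

def simulated_candidate(difficulty, ability=6):
    # Chance of a correct answer falls as item difficulty rises past the
    # candidate's (hypothetical) ability level.
    return random.random() < 1 / (1 + 2 ** (difficulty - ability))

print(run_adaptive_test(simulated_candidate))

The loop here stops after a fixed number of items; real systems usually
stop once the estimate of the candidate’s ability is precise enough, and
derive that estimate statistically rather than from the last difficulty
reached.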
Hughes, Arthur. 2003. Testing for Language Teachers (2nd ed.). Cambridge:
Cambridge University Press.
