CHARACTERISTICS OF A GOOD TEST
TEST
A formal and systematic instrument, usually a paper-and-pencil procedure, designed to assess the quality, ability, skill, or knowledge of students by giving a set of questions in a uniform manner. It is one of the many types of assessment procedures used to gather information about the performance of students.
• VALIDITY
DEFINITION: "Validity is the extent to which a test measures what it claims to measure. It is vital for a test to be valid in order for the results to be accurately applied and interpreted."
Other definitions given by experts
Gronlund and Linn (1995): "Validity refers to the appropriateness of the interpretation made from test scores and other evaluation results with regard to a particular use."
Anne Anastasi (1969) writes: "The validity of a test concerns what the test measures and how well it does so."
Ebel and Frisbie (1991): "The term validity, when applied to a set of test scores, refers to the consistency (accuracy) with which the scores measure a particular cognitive ability of interest."
C.V. Good (1973), in the Dictionary of Education, defines validity as the "extent to which a test or other measuring instrument fulfills the purpose for which it is used."
• TYPES OF VALIDITY
1. Face Validity
- It is the extent to which the measurement method appears "on its face" to measure the construct of interest.
- It is assessed by examining the physical appearance of the instrument to make sure it is readable and understandable.
EXAMPLE: People might have negative reactions to an intelligence test that did not appear to them to be measuring their intelligence.
2. Content Validity
- It is the extent to which the measurement method covers the entire range of relevant behaviors, thoughts, and feelings that define the construct being measured.
- It is established through a careful and critical examination of the objectives of assessment, so that they reflect the curricular objectives.
3. Criterion-based Validity
- It is the extent to which people's scores are correlated with other variables or criteria that reflect the same construct.
- It is established statistically: a set of scores revealed by the measuring instrument is correlated with the scores obtained on another external predictor or measure.
Example: An IQ test should correlate positively with school performance. An occupational aptitude test should correlate positively with work performance.
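As a minimal sketch (not part of the original slides) of how such a criterion validity coefficient is obtained, the snippet below correlates invented IQ scores with invented school grades using Python's standard-library Pearson correlation:

```python
# Criterion-based validity as the Pearson correlation between test
# scores and an external criterion (invented numbers; Python 3.10+
# for statistics.correlation).
from statistics import correlation

iq_scores     = [95, 100, 110, 120, 130, 105, 98, 115]  # hypothetical IQ test
school_grades = [78, 80, 85, 90, 94, 83, 79, 88]        # hypothetical criterion

r = correlation(iq_scores, school_grades)
print(f"criterion validity coefficient r = {r:.2f}")
```

A coefficient near +1 would support the claimed positive relationship; one near 0 would undermine it.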
• TYPES OF CRITERION VALIDITY
3.1 Predictive Validity
- It describes the future performance of an individual by correlating the sets of scores obtained from two measures given at a longer time interval.
- When the criterion is something that will happen or be assessed in the future, it is called predictive validity.
3.2 Concurrent Validity
- It describes the present status of the individual by correlating the sets of scores obtained from two measures given at a close interval.
- When the criterion is something that is happening or being assessed at the same time as the construct of interest, it is called concurrent validity.
4. Construct Validity
- It is established statistically by comparing psychological traits or factors that theoretically influence scores on a test.
TYPES OF CONSTRUCT VALIDITY:
4.1 Convergent Validity
- It is established if the instrument correlates with a measure of another, similar trait related to the one it is intended to measure. E.g. a Critical Thinking Test may be correlated with a Creative Thinking Test.
4.2 Divergent Validity
- It is established if the instrument describes only the intended trait and not other traits. E.g. a Critical Thinking Test may not be correlated with a Reading Comprehension Test.
Nature of Validity
1. Validity refers to the appropriateness of the test results, not to the instrument itself.
2. Validity does not exist on an all-or-none basis; it is a matter of degree.
3. Tests are not valid for all purposes. Validity is always specific to a particular interpretation.
4. Validity is not of different types; it is a unitary concept based on various types of evidence.
Factors Affecting Validity
1. Factors in the test:
(i) Unclear directions on how to respond to the test.
(ii) Difficulty of the reading vocabulary and sentence structure.
(iii) Too easy or too difficult test items.
(iv) Ambiguous statements in the test items.
(v) Inappropriate test items for measuring a particular outcome.
(vi) Inadequate time provided to take the test.
2. Factors in Test Administration and Scoring:
(i) Unfair aid to individual students who ask for help.
(ii) Cheating by the pupils during testing.
(iii) Unreliable scoring of essay-type answers.
(iv) Insufficient time to complete the test.
(v) Adverse physical and psychological conditions at the time of testing.
3. Factors related to the Testee:
(i) Test anxiety of the students.
(ii) Physical and psychological state of the pupil.
(iii) Response set: a consistent tendency to follow a certain pattern in responding to the items.
• RELIABILITY
Reliability refers to the consistency of measurement; that is, how consistent test results or other assessment results are from one measurement to another.
Other definitions given by experts
Gronlund and Linn (1995): "Reliability refers to the consistency of measurement; that is, how consistent test scores or other evaluation results are from one measurement to another."
Ebel and Frisbie (1991): "The term reliability means the consistency with which a set of test scores measure whatever they do measure."
C.V. Good (1973) has defined reliability as the "worthiness with which a measuring device measures something; the degree to which a test or other instrument of evaluation measures consistently whatever it does in fact measure."
Davis (1946): "The degree of relative precision of measurement of a set of test scores is defined as reliability."
Nature of Reliability
1. Reliability refers to the consistency of the results obtained with an instrument, not to the instrument itself.
2. Reliability refers to a particular interpretation of test scores.
3. Reliability is a statistical concept; to determine reliability we administer a test to a group once or more than once.
4. Reliability is a necessary but not a sufficient condition for validity.
Four methods of determining reliability:
(a) Test-Retest method
(b) Equivalent Forms/Parallel Forms method
(c) Split-Half method
(d) Rational Equivalence/Kuder-Richardson method
Test-Retest Method:
This is the simplest method of determining test reliability. The test is given and later repeated on the same group, and the correlation between the first set of scores and the second set of scores is obtained.
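A minimal sketch of that computation, with invented scores standing in for the two sittings:

```python
# Test-retest reliability: correlate the same group's scores from two
# administrations of the same test (invented numbers; Python 3.10+).
from statistics import correlation

first_sitting  = [12, 18, 15, 20, 9, 14, 17, 11]
second_sitting = [13, 17, 16, 19, 10, 15, 16, 12]

print(f"test-retest reliability r = {correlation(first_sitting, second_sitting):.2f}")
```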
Equivalent Forms/Parallel Forms Method:
In this method two parallel forms of a test are administered to the same group of pupils within a short interval of time, and then the scores on the two tests are correlated. This correlation provides the index of equivalence.
Split-Half Method:
In this method a test is administered to a group of pupils in the usual manner. The test is then divided into two equivalent halves, and the correlation between the scores on these half-tests is found.
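The slide stops at the half-test correlation; a standard follow-up step (not stated on the slide) is to estimate full-test reliability from it with the Spearman-Brown formula, r_full = 2 × r_half / (1 + r_half). A sketch with invented 0/1 item data:

```python
# Split-half reliability: correlate odd-item and even-item half scores,
# then apply the Spearman-Brown correction to estimate the reliability
# of the full-length test (invented data; Python 3.10+).
from statistics import correlation

# Rows = pupils, columns = items scored 1 (correct) or 0 (wrong).
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 1],
    [1, 0, 1, 1, 1, 0, 0, 1],
]

odd_half  = [sum(row[0::2]) for row in responses]  # items 1, 3, 5, 7
even_half = [sum(row[1::2]) for row in responses]  # items 2, 4, 6, 8

r_half = correlation(odd_half, even_half)
r_full = 2 * r_half / (1 + r_half)                 # Spearman-Brown step-up

print(f"half-test r = {r_half:.2f}, full-test estimate = {r_full:.2f}")
```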
Rational Equivalence/Kuder-Richardson Method:
This method also provides a measure of internal consistency. It neither requires the administration of two equivalent forms of the test nor requires the test to be split into two equal halves. The reliability coefficient is determined by the Kuder-Richardson formula 20 (KR-20), which reads:
KR-20 = (k / (k - 1)) × (1 - Σ pᵢqᵢ / σ²)
where k is the number of items, pᵢ is the proportion of examinees answering item i correctly, qᵢ = 1 - pᵢ, and σ² is the variance of the total scores.
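A sketch of KR-20 on the same kind of 0/1 response matrix used in the split-half example (invented data):

```python
# KR-20 internal consistency for dichotomously scored (0/1) items,
# following the formula given above (invented data).
from statistics import pvariance

responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 1],
    [1, 0, 1, 1, 1, 0, 0, 1],
]

k = len(responses[0])                     # number of items
totals = [sum(row) for row in responses]  # each pupil's total score
p = [sum(row[i] for row in responses) / len(responses) for i in range(k)]
pq_sum = sum(pi * (1 - pi) for pi in p)   # sum of item variances p*q
var_total = pvariance(totals)             # variance of the total scores

kr20 = (k / (k - 1)) * (1 - pq_sum / var_total)
print(f"KR-20 = {kr20:.2f}")
```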
Factors affecting reliability:
1. Factors related to the test:
(i) Length of the test
(ii) Content of the test
(iii) Characteristics of the items
(iv) Spread of scores
2. Factors related to the testee:
(i) Heterogeneity of the group
(ii) Test-wiseness of the students
(iii) Motivation of the students
3. Factors related to testing procedures:
(i) Time limit of the test
(ii) Cheating opportunities given to the students
• OBJECTIVITY
Objectivity refers to the agreement of two or more raters or test administrators concerning the score of the student.
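One simple way to quantify that agreement (a sketch, not from the slides) is to count exact score matches between two raters and correlate their marks:

```python
# Objectivity as inter-rater agreement: the share of scripts on which
# two raters give the same mark, plus the correlation of their marks
# (invented data; Python 3.10+ for statistics.correlation).
from statistics import correlation

rater_a = [4, 3, 5, 2, 4, 3, 5, 1]
rater_b = [4, 3, 4, 2, 4, 3, 5, 2]

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"exact agreement = {agreement:.0%}")
print(f"inter-rater r   = {correlation(rater_a, rater_b):.2f}")
```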
Other definitions given by experts:
C.V. Good (1973) defines objectivity in testing as "the extent to which the instrument is free from personal error (personal bias), that is, subjectivity on the part of the scorer."
Gronlund and Linn (1995) state: "Objectivity of a test refers to the degree to which equally competent scorers obtain the same results. So a test is considered objective when it makes for the elimination of the scorer's personal opinion and biased judgement. In this context there are two aspects of objectivity which should be kept in mind while constructing a test."
Two aspects of objectivity which should be kept in mind while constructing a test
1. Objectivity in Scoring
- means that the same person or different persons scoring the test at any time arrive at the same result without any chance error.
- a test, to be objective, must necessarily be so worded that only one correct answer can be given to it.
2. Objectivity of Test Items
- means that each item must call for a definite single answer. Well-constructed test items should lend themselves to one and only one interpretation by students who know the material involved. It means the test items should be free from ambiguity.
• FAIRNESS
- means the test items should not have any biases; a test should not be offensive to any examinee subgroup.
- a test can only be good if it is fair to all the examinees.
- a fair assessment provides all students with an equal opportunity to demonstrate achievement.
The keys to fairness are as follows:
- Students have knowledge of the learning targets and assessments.
- Students are given equal opportunity to learn.
- Students possess the prerequisite knowledge and skills.
- Students are free from teacher stereotypes.
- Students are free from biased assessment tasks and procedures.
• SCORABILITY
- means that the test should be easy to score: directions for scoring should be clearly stated in the instructions, students should be provided with an answer sheet, and the one who will check the test should be provided with an answer key.
• ADEQUACY
- means that the test should contain a wide sampling of items to determine the educational outcomes or abilities being measured, so that the resulting scores are representative of the total performance in the areas measured.
• ADMINISTRABILITY
- means that the test should be administered uniformly to all students so that the scores obtained will not vary due to factors other than differences in the students' knowledge and skills. There should be clear instructions for the students, the proctors, and even the one who will check the test (the test scorer).
• PRACTICALITY AND EFFICIENCY
- refers to the teacher's familiarity with the methods used, the time required for the assessment, the complexity of the administration, the ease of scoring, the ease of interpretation of the test results, and the use of materials at the lowest cost.
• BALANCE
- a balanced assessment sets targets in all domains of learning (cognitive, affective, and psychomotor) or domains of intelligence (verbal-linguistic, logical-mathematical, bodily-kinesthetic, visual-spatial, musical-rhythmic, interpersonal-social, intrapersonal-introspective, naturalist-physical world, existential-spiritual)
- it makes use of both traditional and alternative assessment.
THANK YOU!
