CHARACTERISTICS OF
EVALUATION TOOLS
INTRODUCTION
Evaluation tools are of little use if they cannot provide consistent results. An evaluation tool should also measure what it intends to measure. Moreover, it should be economical in terms of both time and money, besides ensuring coverage of all the units of study. These are some of the essential attributes of a good evaluation tool. These characteristics, or attributes, are variously named reliability, validity, practicability, and so on, and are described in this unit.
IMPORTANT CHARACTERISTICS
1. Validity: The validity of a test is the extent to which it fulfils the function for which it is being used.
2. Reliability: The reliability of a test is the consistency with which it measures what it claims to measure. Reliability is a necessary but not a sufficient condition for validity.
3. Practicability: This is one of the important attributes of a good test, besides validity and reliability. It covers all the practical considerations, such as the availability of a test, its cost, and its mode of administration, that are taken into account in deciding whether to use a particular test.
4. Comprehensiveness: A test should cover all possible units of study, i.e., items or questions should be framed from every unit in order to make it comprehensive.
TYPES OF VALIDITY
In determining the validity of an evaluation tool, different types of validity may need to be considered; they are discussed here.
1. Face Validity
2. Content Validity
3. Predictive Validity
4. Construct Validity
FACE VALIDITY
Face validity answers the question, "How do the test items look in the light of the objectives of the test?" In other words, it concerns the reasonableness of the items of a test against the background of the testees for whom the test is meant. It is determined by assuring the relevance, adequacy, and coverage that the test items stand for. A test is said to have face validity if, on first impression, it appears to measure the intended content or trait.
CONTENT VALIDITY
Content validity determines the degree of relationship between the curriculum or course content (the units of study) and the items of the test. It is usually established by developing a table of specification. A test that has been content validated shows a high degree of correspondence between the course content and the test items. Content validity answers the question, "To what extent does the test require the student to demonstrate the achievements which constitute the objectives of instruction in that area?" The content validity of a test must always be viewed in relation to the particular objectives to be assessed.
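For illustration only, a hypothetical table of specification for a short test might look like the one below; the units, weightings, and item counts are invented purely to show how test items are mapped onto the units of study and the instructional objectives.

    Unit of study   Weight   Knowledge   Understanding   Application   Total items
    Unit 1          20%      2           1               1             4
    Unit 2          30%      2           2               2             6
    Unit 3          50%      3           4               3             10
    Total           100%     7           7               6             20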
PREDICTIVE VALIDITY
Predictive validity concerns how well a test predicts some subsequent measure of performance; that is, the results of a test with predictive validity can be used to predict some future outcome. When we are interested in how far the results of a test can be used to predict a future outcome, we expect a high degree of relationship between the test scores and the criterion measure of success on the future task, and predictive validity ensures this. The higher the degree of correlation between the test scores and the future criterion measure, the more effective the test is as a predictor.
Because a future criterion is used in determining predictive validity, and because of the empirical nature of such an exercise, predictive validity is sometimes also referred to as empirical or criterion-related validity. Some authors treat predictive validity as a subclass of criterion-related validity.
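As a minimal sketch with invented data, the Python snippet below correlates hypothetical entrance-test scores with a later criterion measure (first-year grades); the resulting Pearson coefficient is the kind of figure used to judge how effective the test is as a predictor.

    import numpy as np

    # Hypothetical entrance-test scores and a later criterion measure (first-year GPA).
    entrance_scores = np.array([52, 61, 70, 45, 83, 66, 58, 74])
    first_year_gpa = np.array([2.1, 2.8, 3.0, 1.9, 3.6, 2.9, 2.5, 3.2])

    # Pearson correlation between the test scores and the future criterion measure;
    # the higher this coefficient, the more effective the test is as a predictor.
    r = np.corrcoef(entrance_scores, first_year_gpa)[0, 1]
    print(f"Predictive validity coefficient: {r:.2f}")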
CONSTRUCT VALIDITY
When a test is used to describe the degree to which a testee manifests an abstract psychological trait or ability, construct validity is taken into consideration. The construct validity of a test tells us something meaningful about a person's traits, such as introversion, interest, or attitude. The term "construct" refers to non-observable, postulated variables that have evolved either informally or from psychological theory.
Construct validation is an analysis of the meaning of test scores in terms of psychological constructs. More than one criterion is used in construct validation.
TYPES OF RELIABILITY
In this unit, the three important types of reliability that may be of use to a classroom teacher as an evaluator are described:
i) Scorer Reliability
ii) Content Reliability
iii) Temporal Reliability
SCORER RELIABILITY
Scorer reliability refers to the degree of agreement between two scorers marking the same answer script; in this sense it is also called "inter-scorer reliability".
Scorer reliability also refers to the degree of consistency with which the same scorer grades the same answer script on two different occasions; in this case it is called "intra-scorer reliability".
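As a minimal sketch with invented grades, Cohen's kappa is one common index of agreement between two scorers; the same idea applies to intra-scorer reliability by comparing one scorer's grades from two occasions.

    from collections import Counter

    # Hypothetical letter grades assigned to the same eight answer scripts.
    grades_a = ["A", "B", "B", "C", "A", "C", "B", "A"]  # scorer A
    grades_b = ["A", "B", "C", "C", "A", "B", "B", "A"]  # scorer B

    n = len(grades_a)
    observed = sum(a == b for a, b in zip(grades_a, grades_b)) / n  # observed agreement

    # Agreement expected by chance, from each scorer's marginal grade frequencies.
    count_a, count_b = Counter(grades_a), Counter(grades_b)
    expected = sum((count_a[g] / n) * (count_b[g] / n) for g in set(grades_a) | set(grades_b))

    # Kappa corrects the observed agreement for agreement expected by chance.
    kappa = (observed - expected) / (1 - expected)
    print(f"Inter-scorer agreement (Cohen's kappa): {kappa:.2f}")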
CONTENT RELIABILITY
Content reliability determines the ability of all the items of a test to measure competencies in the same general content area. A content-reliable test is one whose items focus on related content areas, so that they are capable of measuring similar or related traits. In other words, content reliability is concerned with the internal consistency of the items of a test.
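As a sketch only, using an invented item-score matrix, Cronbach's alpha is one widely used index of the internal consistency referred to above.

    import numpy as np

    # Hypothetical scores: rows are students, columns are the items of one test.
    scores = np.array([
        [2, 1, 2, 1],
        [1, 1, 1, 0],
        [2, 2, 2, 2],
        [0, 1, 0, 1],
        [1, 2, 1, 1],
    ])

    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of students' total scores

    # Cronbach's alpha: higher values indicate more internally consistent items.
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(f"Cronbach's alpha: {alpha:.2f}")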
TEMPORAL RELIABILITY
Temporal reliability relates to the stability of the results of a test over time; it is the third major dimension of reliability. It is assessed by giving the same test twice to the same sample at two different times and correlating the results of the two administrations.
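A minimal sketch with invented scores: the test-retest coefficient described above is simply the correlation between the two administrations of the same test to the same group.

    import numpy as np

    # Hypothetical scores of the same eight students on two administrations of one test.
    first_administration = np.array([34, 28, 41, 22, 37, 30, 25, 39])
    second_administration = np.array([36, 27, 40, 24, 35, 31, 26, 38])

    # Pearson correlation between the two occasions = temporal (test-retest) reliability.
    r = np.corrcoef(first_administration, second_administration)[0, 1]
    print(f"Temporal (test-retest) reliability: {r:.2f}")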
PRACTICABILITY
Also referred to by some as usability, practicability is considered only after the reliability and validity of a test have been established. The convenience of administering the test and scoring the answer scripts, the economy of its use, and the interpretability of the test scores all determine the practicability of a test.
CONCLUSION
The two characteristics of reliability and validity are the most important and essential attributes of a good evaluation tool.
The validity of a test is the accuracy with which meaningful and relevant measurements can be made; a valid test actually measures the traits it was intended to measure.
A reliable test measures the parameter of interest to the evaluator consistently every time it is used.
Reliability is usually expressed in the form of a coefficient of correlation.
THANKS
