A presentation on the meaning and types of validity, methods of establishing validity, the factors influencing validity, and how to increase the validity of a tool
Validity of a Research Tool
1. VALIDITY OF A RESEARCH TOOL
Meaning, Methods of establishing Validity, Factors
influencing Validity and Research Tool: measures to
increase the Validity of a Tool.
By
Joby Varghese
2. 1. Introduction
2. Understanding 'Validity'
3. Definition
4. Characteristics of Validity
5. Nature of Validity
6. Types of Validity
6.1.Content Validity
6.2.Face Validity
6.3.Construct Validity
6.3.1. Convergent Validity
6.3.2. Discriminant Validity
6.4.Criterion-related Validity
6.4.1. Concurrent Validity
6.4.2. Predictive Validity
6.5.Consequential Validity
6.6.Known-group Validity
7. Factors influencing Validity
of a Research Tool
8. Measures to increase Validity
of a Tool
9. Conclusion
Table of Contents
3. 1. Introduction
Validity in research refers to how accurately a
study answers the study question or the strength
of the study conclusions.
Here validity refers to how well the assessment
tool actually measures the underlying outcome of
interest. Validity is not a property of the tool itself,
but rather of the interpretation or specific purpose
of the assessment tool with particular settings and
learners.
Assessment instruments must be both reliable and
valid for study results to be credible.
5. 2. Understanding 'Validity'
Validity refers to how well an instrument
measures what it is intended to measure.
Validity is the extent to which a test measures
what it claims to measure.
It is vital for a test to be valid in order for the
results to be accurately applied and
interpreted.
Validity isn't determined by a single statistic,
but by a body of research that demonstrates
the relationship between the test and the
behaviour it is intended to measure.
6. Ross defines validity as follows:
One kind of validity concerns the degree to which the test or other
measuring instrument measures what it claims to. In a word, validity
measures truthfulness.
Cureton says,
Validity is therefore defined in terms of the correlation between the
actual test scores and true criterion scores.
Gulliksen defines it in a more particular form when he says,
The validity of a test is the correlation of the test with some criterion.
3. Definition
7. According to Freeman (2006)
An index of validity shows the degree to which a test measures what it purports to
measure when compared with accepted criteria.
Guilford explains the meaning of validity in statistical terms as follows:
What a test measures in common with other tests and other
measures, its common variance, is the basis for validity.
Cook and Campbell (1979) define validity as the
Best available approximation to the truth or falsity of a given inference, proposition
or conclusion.
According to Blumberg et al. (2005)
Validity is often defined as the extent to which an instrument measures what it
asserts to measure.
Contd…
8. 4. Characteristics of Validity
1. It is an important characteristic of a measuring instrument
or a test.
2. It is a measure of constant error while reliability is the
measure of variable error.
3. It is an index of external correlates. The test scores are
correlated with external criterion scores.
4. The criterion may be a set of operations, success on the
job, or a future course of behaviour the test scores predict.
5. It relates to the objective of a test.
9. Contd…
6. It connotes the psychological construct of a variable
which is indirectly measured with the help of
behaviours.
7. It implies the reliability of a test. Thus, if a test is valid,
it must be reliable.
8. It refers to the truthfulness of a test score.
9. It is a function of test length.
10. It indicates how an individual performs in different
situations.
10. 5. Nature of Validity
There are certain cautions that one has to keep in
mind while using the term Validity in evaluation:
1. Validity is a matter of degree and hence does
not exist on an all-or-none basis.
2. Validity refers to the results of a test or
evaluation tool for a given group of
individuals, not the tool itself.
3. Validity being a relative term, a tool would be
valid for a particular situation. This means a
particular tool is not valid in every situation.
12. 6.2.Face Validity
• It is the appearance of validity,
'apparent validity'.
• It refers to whether the tool looks like
it measures what it is supposed to
measure.
• It is an estimate of whether a test
'appears' to measure a certain
criterion; it does not guarantee that
the test actually measures phenomena
in that domain.
13. Contd…
For example, if you are trying to assess
the face validity of a psychological
ability measure, it would be more
convincing if you sent the test to a
carefully selected sample of experts on
psychological ability testing and they all
reported back with the judgment that
your measure appears to be a good
measure of psychological ability.
14. 6.1.Content Validity
• It measures the extent to which items on
a tool are related in a straightforward
way to the characteristics the tool aims
to measure.
• It is the evidence or the degree to which
the content of the tool matches a content
domain associated with the construct.
• It ascertains whether the tool contains
items from the desired content domain.
15. Contd…
For example, in developing a teacher aptitude test, experts in the
field of education would identify the qualities, knowledge, attitude
and skills required to be an effective teacher and then choose (or
rate) items that represent those areas of qualities, knowledge,
attitude and skills.
Lawshe developed a formula termed the content validity ratio as follows:
CVR = (Ne − N/2) / (N/2)
Where,
CVR = Content validity ratio,
Ne = Number of Subject Matter Experts (SMEs) indicating 'essential',
N = Total number of SMEs.
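As a minimal sketch, Lawshe's ratio can be computed per item in Python (the panel size and ratings below are hypothetical):

```python
# Minimal sketch of Lawshe's content validity ratio (CVR).
# The panel size and ratings below are hypothetical.

def content_validity_ratio(n_essential, n_experts):
    """CVR = (Ne - N/2) / (N/2): ranges from -1 (no expert rates
    the item 'essential') through 0 (half do) to +1 (all do)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

# 9 of 10 subject-matter experts rate an item 'essential'.
print(content_validity_ratio(9, 10))  # 0.8
```

Items whose CVR falls below the critical value for the panel size are candidates for removal from the tool.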
16. 6.3.Construct Validity
It refers to the totality of evidence about
whether a particular operationalization of a
construct adequately represents what is
intended by the theoretical account of the
construct being measured.
In other words, it attempts to ascertain whether
the instrument measures the construct that it is
intended to measure.
Construct validity evidence also includes
relationships between the test and measures of
other constructs. It has two types:
17. 6.3.1. Convergent Validity
It refers to the degree to which a measure is
correlated with other measures that it is
theoretically predicted to correlate with.
It is the extent to which the scale correlates
with measures of the same or related
concepts.
For example, a new scale to measure
assertiveness should correlate with existing
measures of assertiveness and with existing
measures of related concepts like
independence.
18. 6.3.2. Discriminant Validity
Discriminant validity describes the degree
to which an operationalization does not
correlate with other operationalizations that
it theoretically should not be correlated
with.
It is the extent to which scores on a tool do
not relate to scores on another tool measuring
a theoretically unrelated dimension.
For example, an assertiveness scale should
not correlate with measures of motivation.
19. 6.4.Criterion-related Validity
Criterion-related validity indicates the degree
of the relationship between the predictor (the
tool) and a criterion (level of performance the
tool is trying to predict).
It refers to the degree of consistency between
test data and some other measure of the same
trait taken either at the same time
(concurrent validity) or at some future time
(predictive validity), and the accuracy with
which the predictor is able to predict
performance on the criterion.
20. 6.4.1. Concurrent Validity
It is reflected in the degree to which results
from two different instruments employed
to measure the same type of learning are in
agreement.
If the test data and criterion data are
collected at the same time, this is referred
to as concurrent validity evidence.
A tool has concurrent validity when those
who do well on one assessment also do
well on the other and vice versa.
21. 6.4.2. Predictive Validity
It refers to the degree to which an
operationalization can predict or correlate
with other measures of the same construct
that are measured at some time in the future.
If the test data are collected first in order
to predict criterion data collected at a
later point in time, then this is referred to
as predictive validity evidence.
22. 6.5.Consequential Validity
It refers to the impact (positive or
negative) and consequences (intended or
unintended) that a particular type of
assessment has for the related teaching and
learning.
If an assessment activity, besides gauging
learners' performance, also encourages
further learning or better teaching, it has
consequential validity.
23. 6.6.Known-group Validity
It refers to the extent to which a tool
distinguishes between two groups known to
differ with respect to the characteristic under
study.
This is explained in terms of the locus of
control.
For example, successful candidates in an
examination, or successful job applicants, are
likely to have an internal locus of control,
whereas unsuccessful candidates are likely to
have an external locus of control.
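A known-groups check is commonly reported as an independent-samples t statistic comparing the two groups' mean scores. The sketch below uses invented locus-of-control scores (lower taken as more internal):

```python
# Sketch: known-group validity via an independent-samples t statistic.
# Scores are invented; lower = more internal locus of control.
from statistics import mean, stdev
from math import sqrt

successful   = [8, 7, 9, 6, 8, 7]        # expected internal (low scores)
unsuccessful = [14, 13, 15, 12, 16, 14]  # expected external (high scores)

# Pooled-variance t for two equal-sized groups.
n = len(successful)
pooled_sd = sqrt((stdev(successful) ** 2 + stdev(unsuccessful) ** 2) / 2)
t = (mean(unsuccessful) - mean(successful)) / (pooled_sd * sqrt(2 / n))

# A large t means the tool separates the known groups as theory predicts.
print(round(t, 2))
```

A significant difference in the predicted direction supports known-group validity; no difference suggests the tool does not capture the characteristic.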
25. 7.0. Factors influencing Validity of a Research Tool
1. Inappropriateness of test items: items that measure knowledge
cannot measure skill.
2. Directions: unclear directions reduce validity. Directions that do
not clearly indicate how the pupils should answer and record
their answers affect the validity of test items.
3. Reading vocabulary and sentence structure: overly difficult
vocabulary and complicated sentence structure prevent a test
from measuring what it intends to measure.
4. Level of difficulty of items: test items that are too difficult or too
easy cannot discriminate between bright and slow pupils and will
therefore lower the test's validity.
26. Contd…
5. Poorly constructed test items: test items that provide clues and
items that are ambiguous confuse the students and will not yield a
true measure.
6. Length of the test: a test should be of sufficient length to measure
what it is supposed to measure. A test that is too short cannot
adequately sample the performance we want to measure.
7. Arrangement of items: test items should be arranged by
difficulty level, from the easiest items to the most difficult.
Difficult items encountered early may cause a mental block
and may also cause students to spend too much time on those items.
27. 8. Measures to increase Validity of a Tool
1. Make sure your goals and objectives are clearly defined
and operationalized. Expectations should be written
down.
2. Match your assessment measure to your goals and
objectives. Additionally, have the test reviewed.
3. Get respondents involved; have the students look over
the assessment for troublesome wording or other
difficulties.
28. Contd…
4. If possible, compare your measure with other
measures, or data that may be available.
5. If no assessment instruments are available, use content
experts to create your own and pilot the instrument
prior to using it in your study. Test the reliability and
include as many sources of validity evidence as are
possible in your paper. Discuss the limitations of this
approach openly.
29. 9. Conclusion
In general, validity is an indication of how
sound your research is. More specifically,
validity applies to both the design and the
methods of your research. Validity in data
collection means that your findings truly
represent the phenomenon you are claiming
to measure. Valid claims are solid claims.
Validity is one of the main concerns with
research.