2. Reliability
• The quality and adequacy of quantitative
data can only be assessed by establishing the
reliability of an instrument.
• It is the degree of consistency with which
the attributes or variables are measured
by an instrument.
• Reliability refers to the consistency of a
measure.
3. Definition
• It is the degree of consistency and accuracy with
which an instrument measures the attributes it is
designed to measure.
• Reliability is the degree to which an assessment
tool produces stable and consistent results.
Measuring reliability
• There are several ways to measure the reliability
of a research tool. The choice depends on factors
such as
• the nature of the instrument and the aspect of
reliability the researcher wants to measure.
4. Important aspects in quantitative
research
• Stability
• Internal consistency
• Equivalence
Stability-
It means the research instrument provides the
same results when it is used consecutively two or
more times.
Stability is estimated to make sure that the
research instrument is consistent in providing
similar results with repeated administration.
5. • It is also known as test-retest
reliability: a measure of reliability
obtained by administering the same test
twice over a period of time to a group of
individuals.
• The test is administered twice at two
different points in time.
• It is used to assess the consistency of a test
across time.
• It is used for questionnaires, observational
checklists, observation rating scales and
physiological measurement tools.
6. It has one basic problem: many traits
may change with time, such as
• Attitude
• Behavior
• Mood
• Knowledge
• Satisfaction and physical condition
and so on.
7. Statistical calculation [test-retest method]
Procedure or steps
• Administer the research instrument to a sample of
subjects on two different occasions.
• The scores obtained on the two occasions are
compared by calculating a correlation coefficient.
• The result reveals the relationship between the
scores generated by the research instrument on the
two separate occasions.
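The steps above can be sketched in Python. This is a minimal illustration, not part of the original slides: the subjects' scores are hypothetical, and the Pearson correlation coefficient is assumed as the correlation formula.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores from the same 6 subjects on two occasions
time1 = [12, 15, 18, 10, 14, 16]
time2 = [13, 14, 18, 11, 15, 17]

r = pearson_r(time1, time2)   # test-retest reliability estimate
# A value above 0.70 would be read as acceptable reliability (see slide 9)
```

Here `r` comes out close to +1.00, so these hypothetical scores would indicate a highly stable instrument.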
9. Interpretation of result
The result ranges from −1.00
through +1.00:
• a score of +1.00 indicates perfect reliability
• a score of 0.00 indicates no reliability
• a score above 0.70 indicates an
acceptable level of reliability of the tool.
10. Internal consistency
• It is also called homogeneity.
• It ensures that all the subparts of a
research instrument measure the same
characteristic.
• The instrument should be specific, not
including other aspects of the topic.
• Ex: patient satisfaction related to nursing
care [not related to health care or the hospital
in general].
11. • One of the most common approaches to
assessing internal consistency is the split-half
method.
• The formulas used are Cronbach's alpha and the
Kuder-Richardson formula [odd-even, first half-
second half etc.]
Procedure for calculating the split-half method
• Divide the items of the research instrument into
two equal parts by grouping either odd-numbered
and even-numbered questions, or first-half and
second-half item groups.
12. • Administer the two sub-parts of the tool
simultaneously, score them independently and
calculate the correlation coefficient between the
two sets of scores by using the formula.
• The split-half correlation reflects only half the
items, so it underestimates the reliability of the
full-length tool.
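The split-half procedure can be sketched as follows. The item scores are hypothetical; the Spearman-Brown correction used in the last line is a standard adjustment for the half-length underestimate, assumed here rather than taken from the slides.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x)
                  * sum((b - my) ** 2 for b in y)) ** 0.5

# Hypothetical item scores: rows = 5 subjects, columns = 6 items (1/0 = correct/incorrect)
items = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
]

# Odd-even split: total each subject's odd-numbered and even-numbered items
odd = [sum(row[0::2]) for row in items]
even = [sum(row[1::2]) for row in items]

r_half = pearson_r(odd, even)       # correlation between the two halves
r_full = 2 * r_half / (1 + r_half)  # Spearman-Brown: full-length reliability estimate
```

Note that `r_full` is always at least as large as `r_half`, reflecting that the full tool has twice as many items as either half.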
14. • Frequently used to estimate internal
consistency: Cronbach's alpha, or
coefficient alpha.
15. Cronbach's alpha is calculated as

r = [k / (k − 1)] × [1 − (Σσi² / σy²)]

Here
• r = the estimated reliability
• k = the total number of items in the test
• σi² = the variance of each individual item
• σy² = the variance of the total test scores
• Σ = the sum of the individual item variances
16. Equivalence-
• This aspect of reliability is estimated when a
researcher is testing the reliability of a tool
that is used by two different observers to
observe a single phenomenon simultaneously
and independently.
• It is also known as interrater or inter-
observer reliability.
• It is estimated by administering the tool to
observe a single event simultaneously and
independently by two or more trained observers.
• r = number of agreements / (number of
agreements + number of disagreements)
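The agreement formula above can be applied directly. The two raters' codings below are hypothetical; the calculation is simply the proportion of events on which they agree.

```python
# Hypothetical data: two trained observers code the same 10 events as "yes"/"no"
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "yes"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
disagreements = len(rater_a) - agreements

# r = agreements / (agreements + disagreements)
r = agreements / (agreements + disagreements)
```

With 8 agreements out of 10 events, the inter-rater reliability here is 0.80, above the 0.70 acceptability threshold.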
18. Validity
• Refers to how well a test measures what it is
purported to measure.
• While reliability is necessary for validity, it
alone is not sufficient: a reliable test is not
necessarily valid. For example, if your scale is
off by 5 lbs, it reads your weight every day with
an excess of 5 lbs. The scale is reliable because
it consistently reports the same weight every day,
but it is not valid because it adds 5 lbs to your
true weight.
19. • Validity refers to an instrument or test
actually testing what it is supposed to test.
• Validity refers to the degree to which an
instrument measures what it is supposed to
measure.
Types of Validity
1. Face validity ascertains that the measure
appears to assess the intended construct
under study.
• It involves an overall look at the instrument
regarding its appropriateness for measuring a
particular attribute or phenomenon.
20. 2. Content validity-
• It is concerned with the scope of coverage of the
content area to be measured. It is most often
applied in tests of knowledge.
• Generally this validity is assessed through the
judgments of experts about the content.
3. Criterion validity-
• It is used to predict future or current performance
- it correlates test results with another criterion
of interest.
• This type of validity is a relationship between
measurements of the instrument and some
other external criterion.
21. 4. Predictive validity-
• It is the degree of forecasting judgment; for
example, some personality tests on the academic
futures of students can be predictive of behavior
patterns.
5. Construct validity-
• This validity is used to ensure that the measure
actually measures what it is intended to measure
(i.e. the construct), and not other variables.
Using a panel of "experts" familiar with the
construct is one way this type of validity
can be assessed.
22. • 6. Formative validity-
• When applied to outcomes assessment, it is used
to assess how well a measure is able to provide
information to help improve the program under
study.
• 7. Sampling validity-
• (similar to content validity) It ensures that the
measure covers the broad range of areas within
the concept under study. Not everything can be
covered, so items need to be sampled from all of
the domains.
23. What are some ways to improve validity?
• Make sure your goals and objectives are clearly
defined and operationalized. Expectations of
students should be written down.
• Match your assessment measure to your goals and
objectives. Additionally, have the test reviewed by
faculty at other schools to obtain feedback from an
outside party who is less invested in the
instrument.
• Get students involved; have the students look over
the assessment for troublesome wording or other
difficulties.
• If possible, compare your measure with other
measures or data that may be available.
24. Some controls to threats of validity include
1. Use of calibrated and proper preparation of
equipment.
2. Replication
3. Single and double blind procedures
4. Automation
5. Multiple observers
6. Use of deception (within the bounds of ethics)
7. Random subject selection
8. Control of subject-to-subject communication