The document discusses validity and reliability in research. It defines validity as the degree to which a study accurately reflects the concept being measured. There are several types of validity discussed, including content validity, construct validity, and criterion-related validity. Reliability refers to the consistency of measurements. Rater reliability and instrument reliability are examined. Methods for establishing reliability include test-retest analysis, equivalence of test forms, and measures of internal consistency such as Cronbach's alpha. Generalizability and sampling methods are also summarized.
Reliability
Reliability refers to the extent to which a scale produces consistent results when the measurements are repeated a number of times.
Reliability is a measure of the stability or consistency of test scores: a measurement procedure is reliable when it yields consistent scores while the phenomenon being measured is not changing.
It is the degree to which scores are free of measurement error; it reflects the consistency of the measurement.
Example: a weighing scale used multiple times in a day by the same individual should give the same reading each time.
Types of reliability
Internal consistency reliability
Test-retest reliability
Split–half method
Inter-rater reliability
Internal consistency reliability
Also known as inter-item reliability.
It is the measure of how well the items on the test measure the same construct or idea.
Cronbach's Alpha
Cronbach's alpha is the statistic most commonly used to measure inter-item reliability, for example to check whether a questionnaire with multiple questions is reliable. As a rule of thumb, its value should be above 0.7.
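As a concrete illustration, Cronbach's alpha can be computed directly from its formula. The sketch below uses made-up Likert-scale responses; all data and names are illustrative, not from the source:

```python
# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item (equal lengths)."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[p] for item in items) for p in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Hypothetical 5-point Likert responses: 3 items, 5 respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # about 0.886 here, above the 0.7 rule of thumb
```

Respondents who answer the three items in a consistent pattern drive the total-score variance up relative to the item variances, which is what pushes alpha toward 1.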
Test-retest reliability
Test-retest reliability is a measure of reliability obtained by administering the same test twice over a period of time to same group of individuals.
Test-retest reliability is the degree to which scores are consistent over time.
Same test- different times
Example: administering the same questionnaire, such as an IQ test, to the same group at two different times.
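Test-retest reliability is usually quantified as the correlation between the two administrations. A minimal Python sketch with hypothetical scores for the same five people:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# Hypothetical IQ scores for the same five people, two administrations apart
time1 = [98, 110, 105, 92, 120]
time2 = [100, 108, 107, 95, 118]
r = pearson_r(time1, time2)
print(round(r, 3))  # close to 1.0, i.e. high test-retest reliability
```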
Split–half method
A method of determining the reliability of a test by dividing the whole test into two halves and scoring the two halves separately.
Especially appropriate when the test is very long.
The most commonly used way to split the test in two is the odd-even strategy (odd-numbered items form one half, even-numbered items the other).
Inter-rater reliability
Inter-rater reliability is the extent to which two or more raters (or observers, coders, examiners) agree.
Inter-rater reliability is essential when making decisions in research and clinical settings.
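Inter-rater agreement on categorical judgments is often summarized with Cohen's kappa, which corrects raw agreement for chance. A small sketch with hypothetical ratings from two raters:

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(r1)
    categories = sorted(set(r1) | set(r2))
    p_observed = sum(a == b for a, b in zip(r1, r2)) / n
    p_chance = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical yes/no codings of the same eight observations by two raters
rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
kappa = cohen_kappa(rater1, rater2)
print(kappa)  # 0.5: the raters agree moderately more often than chance alone
```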
Hello everyone, this is Vartika Verma, student of B. El. Ed 4. This presentation titled 'Reliability' is helpful for the subject 'Measurement and Evaluation' in B. El. Ed 4 and also for all the Education students. Thanking you :)
Topic: What is Reliability and its Types?
Student Name: Kanwal Naz
Class: B.Ed 1.5
Project Name: "Young Teachers' Professional Development (TPD)"
Project Founder: Prof. Dr. Amjad Ali Arain
Faculty of Education, University of Sindh, Pakistan
Topic: Validity
Student Name: Parkash Mal
Class: B.Ed. (Hons) Elementary
Project Name: "Young Teachers' Professional Development (TPD)"
Project Founder: Prof. Dr. Amjad Ali Arain
Faculty of Education, University of Sindh, Pakistan
Validity:
Validity refers to how well a test measures what it is purported to measure.
Types of Validity:
1. Logical validity:
Validity established at the level of theory and statements. It has two subtypes.
I. Face Validity:
It is the extent to which the measurement method appears “on its face” to measure the construct of interest.
• Example: suppose you were taking an instrument that reportedly measures your attractiveness, but the questions asked you to identify the correctly spelled word in each list; the instrument would lack face validity.
II. Content Validity:
Content validity means measuring all the aspects contributing to the variable of interest.
Example:
If physical fitness is taken to comprise temperature, height, and stamina, then a test of fitness must include content assessing all three.
2. Criterion validity:
It is the extent to which people's scores are correlated with other variables or criteria that reflect the same construct.
Example:
An IQ test should correlate positively with school performance.
An occupational aptitude test should correlate positively with work performance.
Types of Criterion Validity
Concurrent validity:
• When the criterion is something that is happening or being assessed at the same time as the construct of interest, it is called concurrent validity.
• Example: a new measure of self-esteem should correlate positively with an old, established measure administered at the same time.
Predictive validity:
• When the criterion is something that will happen or be assessed in the future, this is called predictive validity.
• Example: admission tests such as the GAT or SAT, whose scores are used to predict future academic performance.
Other types of validity
Internal Validity:
It is the extent to which a study is free from flaws, so that any differences in a measurement are due to the independent variable and nothing else.
External Validity
• It is the extent to which the results of a research study can be generalized to different situations, different groups of people, different settings, different conditions, etc.
This is a presentation on the meaning and types of validity, the methods of establishing validity, the factors influencing validity, and how to increase the validity of a tool.
These slides discuss the concept and definition of variables, variables in research, operationalisation, types and functions of variables, and measurement scales.
Introduction
Quantitative research methodology uses a deductive reasoning process (Erford, 2015, p. 5). It is based on philosophical assumptions that are very different from those that support qualitative research. Quantitative studies fall under what is broadly described as a positivist perspective. Epistemologically, knowledge is something that is believed to be objective and measurable, and the nature of reality (that is, ontology) is such that there is one fixed, observable, and definable reality. Quantitative approaches to research emphasize the objectivity of the researcher, and because a goal is to uncover the one true reality, values (axiological assumptions) and the subjective nature of experience are not likely to be examined.
Quantitative Research Designs
Quantitative research can be categorized in different ways. Brief descriptions of some designs appear below. The chosen research design is determined by the nature of the inquiry, that is, what the researcher wants to learn by conducting the study.
The textbook Counseling Research: Quantitative, Qualitative, and Mixed Methods thoroughly describes several major research designs.
Experimental Research
Experimental research, one of the quantitative designs, involves random selection and random assignment of subjects to two or more groups over which the researcher has control. This is what distinguishes experimental studies from the other designs. Experimental studies in counseling are not that common, because many research questions do not lend themselves to random selection and assignment for ethical reasons. Experimental studies compare the effect of one or more independent variables on one or more dependent variables. Independent variables fall into two broad categories. One type of independent variable involves measuring some characteristic inherent in the study's participants, such as their age, gender, IQ, personality traits, income, or education level. These demographic or blocking variables are not something which the researcher can manipulate, though the researcher can statistically control for them. The treatment or experimental conditions that the researcher sets up is the other type of independent variable, which is unique to experimental designs. The element of control is what permits researchers to conclude that one variable has caused a change in another variable.
Quasi-Experimental Research
Quasi-experimental research designs come in many different forms. Like experimental research, the researcher aims to compare the effect of the independent variable under their control on the dependent variable. However, the researcher does not or cannot randomly assign individual participants to treatment and control groups, so cause-and-effect relationships cannot be as strongly inferred from the results. Pre-existing conditions of one group in comparison to the other may confound the findings. An example might be a study examining the potential effects of a new curriculum aimed at reducing…
2. Hypotheses
A hypothesis is a type of prediction found in many experimental studies; it is a statement about what we expect to happen in a study.
In research reports there are generally two types of hypotheses: research hypotheses and null hypotheses.
The null hypothesis (often written as H0) is a neutral statement used as a basis for testing. The null hypothesis states that there is no relationship between the items under investigation.
3. The statistical task is to reject the null hypothesis and to show that there is a relationship between X and Y. When, based on previous research reports in the literature, we expect a particular outcome, we can form research hypotheses.
4. The first is to predict that there will be a difference between two groups, although we do not have sufficient information to predict the direction of the difference. This is known as a nondirectional or two-way hypothesis.
5. On the other hand, we may have enough information to predict a difference in one direction or another. This is called a directional or one-way hypothesis.
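The one-way/two-way distinction shows up directly in how a p-value is computed. Below is a permutation-test sketch in Python that reports both a directional and a nondirectional p-value; the group scores and names are hypothetical:

```python
import random

def perm_test(a, b, n_perm=10000, seed=0):
    """Permutation test of the difference in group means.
    Returns (directional_p, nondirectional_p)."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    one_tailed = two_tailed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        d = sum(pa) / len(pa) - sum(pb) / len(pb)
        one_tailed += d >= observed            # directional (one-way) hypothesis
        two_tailed += abs(d) >= abs(observed)  # nondirectional (two-way) hypothesis
    return one_tailed / n_perm, two_tailed / n_perm

# Hypothetical scores: group_a taught with a new method, group_b with the old one
group_a = [82, 79, 88, 91, 85, 90]
group_b = [75, 80, 72, 78, 74, 77]
p_one, p_two = perm_test(group_a, group_b)
```

Because a nondirectional test counts extreme differences in either direction, its p-value is always at least as large as the directional one, which is why a one-way hypothesis requires a stronger prior justification.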
6. VARIABLE TYPES
In order to carry out any sort of measurement, we need to think about variables; that is, characteristics that vary from person to person, text to text, or object to object. Simply put, variables are features or qualities that change.
7. INDEPENDENT AND DEPENDENT VARIABLES
There are two main variable types: independent and dependent. The independent variable is the one that we believe may "cause" the results; the dependent variable is the one we measure to see the effects the independent variable has on it.
8. MODERATOR VARIABLES
• Moderator variables are characteristics of individuals or of treatment variables that may result in an interaction between an independent variable and other variables.
9. INTERVENING VARIABLES
• Intervening variables are similar to moderator variables, but they are not included in an original study either because the researcher has not considered the possibility of their effect or because they cannot be identified in a precise way.
10. CONTROL VARIABLES
• When conducting research, one ideally wants to study simply the effects of the independent variable on a dependent variable.
11. • Variables that might interfere with the findings include the possibility that learners with different levels of proficiency respond differently to different types of feedback.
12. OPERATIONALIZATION
An operational definition allows researchers to operate, or work, with the variables; operationalizations allow measurement. Once a variable has been operationalized in this way, it is possible to use it in measurements.
13. MEASURING VARIABLES: SCALES OF MEASUREMENT
• The three commonly used scales are: nominal, ordinal, and interval.
14. 1. Nominal scales are used for attributes or categories and allow researchers to categorize variables into two or more groups. With nominal scales, different categories can be assigned numerical values.
2. An ordinal scale is one in which ordering is implied. For example, student test scores are often ordered from best to worst or worst to best, with the result that there is a 1st-ranked student, a 2nd-ranked student, a 10th-ranked student, and so forth.
3. An interval scale represents the order of a variable's values, but unlike an ordinal scale it also reflects the interval or distance between points in the ranking.
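The three scales can be illustrated in a few lines of Python (the names and scores are made up): nominal values are mere labels, ordinal values carry rank order only, and interval values support meaningful differences:

```python
# Nominal: labels only; any numbers assigned to categories are arbitrary codes
first_language = {"Ali": "Urdu", "Sara": "Sindhi", "Tom": "English"}

# Ordinal: rank order matters, but the gaps between ranks are not fixed
scores = {"Ali": 81, "Sara": 92, "Tom": 74}
ordered = sorted(scores, key=scores.get, reverse=True)
ranks = {name: position + 1 for position, name in enumerate(ordered)}
print(ranks)  # {'Sara': 1, 'Ali': 2, 'Tom': 3}

# Interval: differences between values are themselves meaningful
gap = scores["Sara"] - scores["Ali"]
print(gap)  # 11 points separate the 1st- and 2nd-ranked students
```

Note that the ranks alone cannot recover the 11-point gap; that information exists only at the interval level.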
15. VALIDITY
• After spending a great deal of time and effort designing a study, we want to make sure that the results of our study are valid.
• That is, we want them to reflect what we believe they reflect and that they are meaningful in the sense that they have significance not only to the population that was tested but, at least for most experimental research, to a broader, relevant population.
16. There are many types of validity: content validity, face validity, construct validity, criterion-related validity, and predictive validity.
17. CONTENT VALIDITY
Content validity refers to the representativeness of our measurement regarding the phenomenon about which we want information. If, for example, our test consists only of sentences containing some of the relative clause types, the testing instrument is not sensitive to the full range of relative clause types, and we can say that it lacks content validity.
18. FACE VALIDITY
Face validity is closely related to the notion of content validity and refers to the familiarity of our instrument and how easy it is to convince others that there is content validity to it. If learners are presented with reasoning tasks to carry out in an experiment and are already familiar with these sorts of tasks because they have carried them out in their classrooms, we can say that the task has face validity for the learners.
19. Construct Validity
This is perhaps the most complex of the validity types discussed so far. Construct validity is an essential topic in second language acquisition research precisely because many of the variables investigated are not easily or directly defined. In second language research, variables such as language proficiency, aptitude, exposure to input, and linguistic representations are of interest.
20. However, these constructs are not directly measurable in the way that height, weight, or age are. In research, construct validity refers to the degree to which the research adequately captures the construct of interest. Construct validity can be enhanced when multiple estimates of a construct are used.
21. Criterion-Related Validity
Criterion-related validity refers to the extent to which tests used in a research study are comparable to other well-established tests of the construct in question. If one measures the performance of a group of students on the local test and on a well-established test and there is a good correlation, one can then say that the local test has been demonstrated to have criterion-related validity.
22. Predictive Validity
Predictive validity deals with the use that one might eventually want to make of a particular measure. Considering the earlier example of a local language test, if the test predicts performance on some other dimension (e.g., class grades), the test can be said to have predictive validity.
23. Internal Validity
Internal validity refers to the extent to which the results of a study are a function of the factor that the researcher intends it to be. A researcher must control for (i.e., rule out) all other possible factors that could potentially account for the results. It is important to think through a design carefully to eliminate, or at least minimize, threats to internal validity.
24. • There are many ways that internal validity can be compromised, some of the most common and important of which include participant characteristics, participant mortality (dropout rate), participant inattention and attitude, participant maturation, data collection (location and collector), and instrumentation and test effects.
27. Data Collection: Location and Collector
Not all research studies will be affected by the location of data collection, but some might. If two groups are given the same test, the results might be influenced if one group is in a noisy or uncomfortable setting and the other is not.
28. Another factor in some types of research relates to the person doing the data collection. One could imagine different results depending on whether or not the interviewer is a member of the native culture or speaks the native language.
29. Instrumentation and Test Effects
• In this section we discuss three factors that may affect internal validity: equivalence between pre- and posttests, giving the goal of the study away, and test instructions and questions.
31. External Validity
• With external validity, we are concerned with the generalizability of our findings, or in other words, the extent to which the findings of the study are relevant not only to the research population, but also to the wider population of language learners.
• It is important to remember that a prerequisite of external validity is internal validity.
32. Sampling
Random sampling refers to the selection of participants from the general population that the sample will represent. There are two common types of random sampling: simple random sampling (e.g., putting all names in a hat and drawing from that pool) and stratified random sampling (e.g., random sampling based on categories).
33. Simple random sampling is generally believed to be the best way to obtain a sample that is representative of the population, especially as the sample size gets larger. The key to simple random sampling is ensuring that each and every member of a population has an equal and independent chance of being selected for the research.
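A simple random draw, in which every member has an equal chance, can be sketched in a few lines of Python (the roster and seed are hypothetical):

```python
import random

# Hypothetical roster of 500 students; each has an equal, independent
# chance of being chosen, which is the key requirement for this design
population = [f"student_{i:03d}" for i in range(1, 501)]
rng = random.Random(42)          # fixed seed so the draw is reproducible
sample = rng.sample(population, 30)
print(len(sample))  # 30 distinct members, drawn without replacement
```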
34. STRATIFIED RANDOM SAMPLING
Stratified random sampling provides precision in terms of the representativeness of the sample and allows preselected characteristics to be used as variables. In stratified random sampling, the proportions of the subgroups in the population are first determined, and then participants are randomly selected from within each stratum according to the established proportions.
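The proportional-allocation procedure just described can be sketched as follows (the strata, member labels, and proportions are hypothetical):

```python
import random

# Hypothetical population stratified by proficiency level
strata = {
    "beginner":     [f"b{i}" for i in range(60)],   # 60% of the population
    "intermediate": [f"i{i}" for i in range(30)],   # 30%
    "advanced":     [f"a{i}" for i in range(10)],   # 10%
}
total = sum(len(members) for members in strata.values())
sample_size = 20
rng = random.Random(0)

# Draw from each stratum in proportion to its share of the population
sample = []
for level, members in strata.items():
    k = round(sample_size * len(members) / total)
    sample.extend(rng.sample(members, k))
print(len(sample))  # 20 in total: 12 beginner, 6 intermediate, 2 advanced
```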
36. Systematic sampling is the choice of every nth individual in a population list (where the list should not be ordered systematically).
Convenience sampling is the selection of individuals who happen to be available for study.
In a purposive sample, researchers knowingly select individuals based on their knowledge of the population and in order to elicit data in which they are interested.
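Systematic sampling (every nth individual starting from a random point) can be sketched as follows; the population IDs and interval are illustrative:

```python
import random

population = list(range(1, 101))       # 100 member IDs, not systematically ordered
k = 10                                 # interval: take every 10th individual
start = random.Random(7).randrange(k)  # random starting point in the first interval
sample = population[start::k]
print(len(sample))  # 10 individuals, each k positions apart in the list
```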
37. Representativeness and Generalizability
• If researchers want the results of a particular study to be generalizable, it is incumbent upon them to make an argument about the representativeness of the sample.
38. Fraenkel and Wallen (2003) provided the following minimum sample numbers as a guideline:
• 100 for descriptive studies
• 50 for correlational studies
• 15 to 30 per group in experimental studies, depending on how tightly controlled they are.
39. • In second language studies, small groups are sometimes appropriate as long as the techniques for analysis take the numbers into account.
40. Collecting Biodata Information
• When reporting research, it is important to include sufficient information to allow the reader to determine the extent to which the results of your study are indeed generalizable to a new context. For this reason, the collection of biodata information is an integral part of one's database.
41. • It is recommended that the researcher include enough information for the study to be replicable (American Psychological Association, 2001) and, for our purposes in this chapter, enough information for readers to determine generalizability.
• Two concerns must be balanced: the first is the privacy and anonymity of the participants; the second is the need to report sufficient data about the participants to allow future researchers to both evaluate and replicate the study.
42. • The Publication Manual of the American Psychological Association also suggested that, in reporting information about participants, selection and assignment to treatment groups also be included.
44. Rater Reliability
The main defining characteristic of rater reliability is that scores by two or more raters, or by one rater at Time X and that same rater at Time Y, are consistent. In many instances, test scores are objective and there is little judgment involved.
45. Interrater reliability begins with a well-defined construct. It is a measure of whether two or more raters judge the same set of data in the same way. We want to make sure that our definition of LREs (language-related episodes, or whatever construct we are dealing with) is sufficiently specific to allow any researcher to identify them as such.
46. Instrument Reliability
Not only do we have to make sure that our raters are judging what they believe they are judging in a consistent manner; we also need to ensure that our instrument is reliable. In this section, we consider three types of reliability testing: test-retest, equivalence of forms of a test (e.g., pretest and posttest), and internal consistency.
47. Test-Retest
In a test-retest method of determining reliability, the same test is given to the same group of individuals at two points in time. One must carefully determine the appropriate time interval between test administrations. In order to arrive at a score by which reliability can be established, one determines the correlation coefficient between the two test administrations.
48. Equivalence of Forms
There are times when it is necessary to determine the equivalence of two tests, as, for example, in a pretest and a posttest. In this method of determining reliability, two versions of a test are administered to the same individuals and a correlation coefficient is calculated.
49. Internal Consistency
It is not always possible or feasible to administer tests twice to the same group of individuals (whether the same test or two different versions). When that is the case, there are statistical methods to determine reliability: split-half, Kuder-Richardson 20 and 21, and Cronbach's alpha are common ones. We provide a brief description of each.
50. Split-half reliability is determined by obtaining a correlation coefficient comparing performance on one half of a test with performance on the other half. A statistical adjustment (the Spearman-Brown prophecy formula) is generally made to determine the reliability of the test as a whole. If the correlation coefficient is high, it suggests that there is internal consistency to the test.
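The odd-even split and the Spearman-Brown adjustment can be sketched together in Python; the item scores below are made up for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# Hypothetical right/wrong scores: 6 items (rows) x 5 test-takers (columns)
item_scores = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [1, 1, 0, 1, 1],
]
# Odd-even split: each test-taker's total on odd items vs. even items
odd = [sum(item_scores[i][p] for i in range(0, 6, 2)) for p in range(5)]
even = [sum(item_scores[i][p] for i in range(1, 6, 2)) for p in range(5)]
r_half = pearson_r(odd, even)
# Spearman-Brown prophecy formula: estimated reliability of the full-length test
full_test = (2 * r_half) / (1 + r_half)
```

The adjustment is needed because the half-test correlation understates the reliability of the full-length test; the corrected value is always at least as large as the raw split-half correlation.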
51. Kuder-Richardson 20 and 21 are two approaches that are also used. Although Kuder-Richardson 21 requires equal difficulty of the test items, Kuder-Richardson 20 does not. Both are calculated using information consisting of the number of items, the mean, and the standard deviation. These are best used with large numbers of items.
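A sketch of the KR-20 computation from raw dichotomous (0/1) item scores; the data are made up for illustration:

```python
def kr20(items):
    """Kuder-Richardson 20 for dichotomous (0/1) items.
    items: one list of 0/1 scores per item (equal lengths)."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[p] for item in items) for p in range(n)]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # p*q per item: proportion answering correctly times proportion answering incorrectly
    pq = sum((sum(item) / n) * (1 - sum(item) / n) for item in items)
    return k / (k - 1) * (1 - pq / var_total)

# Hypothetical scores: 4 items (rows) x 6 test-takers (columns)
answers = [
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
    [1, 1, 0, 0, 0, 1],
]
print(round(kr20(answers), 3))  # about 0.823
```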
52. Cronbach's alpha is similar to Kuder-Richardson 20, but is used when the number of possible answers is more than two. Unlike Kuder-Richardson, Cronbach's alpha can be applied to ordinal data.