BUSINESS RESEARCH
METHODS
MEASUREMENT AND SCALING
Siva Sivani Institute of Management
BY
L.Nagarjuna Reddy
ROLL NO 26-032
PGDM A
THE CRITERIA FOR GOOD
MEASUREMENT
VALIDITY
 Research validity in surveys relates to the extent to which the
survey measures the right elements that need to be measured. In
simple terms, validity refers to how well an instrument measures
what it is intended to measure.
 Reliability alone is not enough; measures need to be both
reliable and valid.
 Example: if a weighing scale is consistently off by 4 kg (it
deducts 4 kg from the actual weight), it can be described as
reliable, because the scale displays the same weight every time we
measure a specific item. However, the scale is not valid because
it does not display the actual weight of the item.
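To make the reliable-but-not-valid distinction concrete, here is a
minimal Python sketch; the 60 kg weight and the five repetitions are
assumed values for illustration only.

    actual_weight = 60.0  # hypothetical true weight in kg

    # Five repeated readings from a scale that always deducts 4 kg.
    readings = [actual_weight - 4.0 for _ in range(5)]

    print(readings)                      # [56.0, 56.0, 56.0, 56.0, 56.0]
    print(len(set(readings)) == 1)       # True: consistent, hence reliable
    print(readings[0] == actual_weight)  # False: biased, hence not valid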
TYPES OF VALIDITY
 1. Face Validity: ascertains that the measure appears to assess
the intended construct under study. Stakeholders can easily
assess face validity. Although this is not a very “scientific” type of
validity, it may be an essential component in enlisting the
motivation of stakeholders. If the stakeholders do not believe the
measure is an accurate assessment of the ability, they may become
disengaged from the task.
Example: If a measure of art appreciation is created, all of the items
should be related to the different components and types of art. If the
questions concern historical time periods, with no reference to any
artistic movement, stakeholders may not be motivated to give their best
effort or invest in this measure because they do not believe it is a true
assessment of art appreciation.
Validity (cont.)
 2. Construct Validity: It is used to ensure that the measure is
actually measuring what it is intended to measure (i.e. the
construct), and not other variables. Using a panel of “experts”
familiar with the construct is one way in which this type of validity
can be assessed. The experts can examine the items and decide what
each specific item is intended to measure. Students can be
involved in this process to obtain their feedback.
Example: A women’s studies program may design a cumulative
assessment of learning throughout the major. If the questions are
written with complicated wording and phrasing, the test can
inadvertently become a test of reading comprehension rather
than a test of women’s studies. It is important that the measure is
actually assessing the intended construct.
Validity (cont.)
3. Criterion-Related Validity: It is used to predict future or
current performance; it correlates test results with another
criterion of interest.
Example: Suppose a physics program designed a measure to assess
cumulative student learning throughout the major. The new measure
could be correlated with a standardized measure of ability in the
discipline, such as an ETS field test or the GRE subject test. The
higher the correlation between the established measure and the new
measure, the more faith stakeholders can have in the new assessment.
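A hedged sketch of how such a validity coefficient could be computed
in Python; the eight pairs of scores below are invented for
illustration, not real ETS or GRE data.

    from scipy.stats import pearsonr

    new_measure = [72, 85, 90, 64, 78, 88, 70, 95]  # new departmental test
    criterion   = [68, 82, 94, 60, 75, 91, 66, 97]  # established measure

    # Pearson's r between the new measure and the criterion.
    r, p_value = pearsonr(new_measure, criterion)
    print(f"validity coefficient r = {r:.2f} (p = {p_value:.3f})")

The closer r is to 1, the stronger the evidence of criterion-related
validity.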
RELIABILITY
 MEANING: A measure is said to be reliable when it elicits the
same response from the same person when the measuring
instrument is administered to that person successively under
similar or nearly identical circumstances.
TYPES OF RELIABILITY
 Test-retest reliability: It is a measure of reliability obtained by
administering the same test twice, over a period of time, to a
group of individuals. The scores from Time 1 and Time 2 can
then be correlated in order to evaluate the test’s stability over
time.
 Example: A test designed to assess student learning in psychology
could be given to a group of students twice, with the second
administration perhaps coming a week after the first. The obtained
correlation coefficient would indicate the stability of the scores.
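A minimal sketch of the test-retest computation, assuming invented
scores for seven students; the same correlation applies to equivalent
forms reliability, with the two forms’ scores in place of the two
administrations.

    import numpy as np

    time1 = np.array([55, 70, 82, 64, 91, 76, 60])  # first administration
    time2 = np.array([58, 68, 85, 61, 93, 74, 63])  # one week later

    # np.corrcoef returns the 2x2 correlation matrix; [0, 1] is r(time1, time2).
    r = np.corrcoef(time1, time2)[0, 1]
    print(f"test-retest reliability r = {r:.2f}")  # near 1 indicates stability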
Reliability (cont.)
 Equivalent forms reliability: In equivalent forms reliability, two
equivalent forms are administered to the subjects at two different
times. To measure the desired characteristic of interest, the two
equivalent forms are constructed with different samples of items.
Both forms contain the same types of questions and the same
structure, with some specific differences.
Reliability (cont.)
 Internal Consistency Reliability: Internal consistency reliability is
used to assess the reliability of a summated scale, in which
several items are summed to form a total score.
 Coefficient alpha, or Cronbach’s alpha, is in effect the mean
reliability coefficient across all the different ways of splitting the
items included in the measuring instrument.
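As a sketch of the computation, the standard formula
alpha = k/(k-1) * (1 - sum of item variances / variance of the total
score) can be applied to an item-score matrix; the five respondents
and four items below are invented for illustration.

    import numpy as np

    # Rows = respondents, columns = items of the summated scale (1-5 ratings).
    scores = np.array([
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 5, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
    ])

    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed score

    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")    # here about 0.94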
SENSITIVITY
 MEANING: Sensitivity is the ability of a measuring instrument to
measure meaningful differences in the responses obtained
from the subjects included in the study.
 It is to be noted that dichotomous response categories such
as yes or no cannot capture a great deal of variability in the
responses.
 Example: a scale based on five categories of responses, such
as strongly disagree, disagree, neither agree nor disagree,
agree and strongly agree, presents a more sensitive measuring
instrument.
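An illustrative sketch of the sensitivity point, using invented
responses from ten hypothetical subjects coded both ways.

    import numpy as np

    dichotomous = np.array([0, 1, 1, 0, 1, 1, 0, 1, 0, 1])  # no = 0, yes = 1
    likert      = np.array([2, 4, 5, 1, 4, 5, 2, 3, 1, 4])  # 1 = strongly disagree ... 5 = strongly agree

    print(len(set(dichotomous)), "distinct response levels (yes/no)")  # 2
    print(len(set(likert)), "distinct response levels (five-point)")   # 5
    # The five-point item spreads the same respondents across more levels,
    # so it can register finer, more meaningful differences.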