RESEARCH INSTRUMENT,
VALIDITY AND
RELIABILITY
Research Instruments
• These are the basic tools researchers use to gather data for
specific research problems.
• Common instruments are performance tests,
questionnaires, interviews, and observation checklists.
• The first two instruments are usually used in quantitative
research, while the last two are more often used in
qualitative research.
Characteristics of a Good
Research Instrument
• Concise.
• Have you tried answering a very long test, and because of
its length, you just pick the answer without even reading it?
• A good research instrument is concise in length yet can
elicit the needed data.
• Sequential.
• Questions or items must be arranged well. It is
recommended to arrange them from simplest to most
complex; this makes the instrument easier for the
respondents to answer.
Characteristics of a Good
Research Instrument
• Valid and reliable.
• The instrument should pass the tests of validity and reliability to
get more appropriate and accurate information.
• Easily tabulated.
• Since you will be constructing an instrument for quantitative
research, this factor should be considered. Hence, before crafting
the instrument, the researcher makes sure that the variables and
research questions are established. These are an important
basis for writing the items of the research instrument.
Ways of Developing a Research
Instrument
• There are three ways you can consider in developing
the research instrument for your study.
• The first is adopting an instrument already utilized in
previous related studies.
• The second is modifying an existing instrument when the
available instruments do not yield the exact data that will answer
the research problem.
• The third is constructing your own instrument so that it
corresponds to the variables and scope of your
current study.
Common Scales Used in
Quantitative Research
• Likert Scale. This is the most common scale used in
quantitative research. Respondents are asked to rate or
rank statements according to the scale provided.
• Example: A Likert scale that measures the attitude of
students towards distance learning.
Common Scales Used in
Quantitative Research
• Semantic Differential. In this scale, a series of bipolar adjectives
will be rated by the respondents. This scale seems to be more
advantageous since it is more flexible and easier to construct.
• Example: On a description of an active student in school
activities.
Pleasant 5 4 3 2 1 Unpleasant
Enthusiastic 5 4 3 2 1 Not Enthusiastic
Competent 5 4 3 2 1 Incompetent
• Another important consideration in constructing a research instrument is
how to establish its validity and reliability.
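Both scales yield numeric ratings, which is what makes such instruments easy to tabulate. A minimal Python sketch (the item wordings and ratings below are invented for illustration):

```python
# Hypothetical 5-point Likert responses: each item maps to the
# ratings given by five respondents (all values are invented).
responses = {
    "I enjoy learning through distance learning": [5, 4, 4, 3, 5],
    "I can focus during online classes": [2, 3, 3, 4, 2],
}

def item_mean(ratings):
    """Average rating for a single Likert item."""
    return sum(ratings) / len(ratings)

for item, ratings in responses.items():
    print(f"{item}: mean rating = {item_mean(ratings):.2f}")
```

Item means like these are the kind of tabulation the "easily tabulated" characteristic refers to: each statement reduces to a number that can be compared and analyzed further.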
VALIDITY
• A research instrument is considered valid if it measures
what it is supposed to measure.
• When measuring students' oral communication proficiency,
a speech performance rated with a rubric or rating scale is
more valid than a multiple-choice test.
• Validity also has several types: face, content, construct,
concurrent, and predictive validity.
Types of Validity of Instrument
1. Face Validity. It is also known as “logical validity.” It
calls for an intuitive judgment of the instrument as it
“appears.” Just by looking at the instrument, the researcher
decides if it is valid.
2. Content Validity. An instrument judged to have
content validity meets the objectives of the study. This is
established by checking whether the statements or questions
elicit the needed information. Experts in the field of
interest can also identify specific elements that the
instrument should measure.
Types of Validity of Instrument
3. Construct Validity. It refers to the validity of the instrument
as it corresponds to the theoretical constructs of the study. It
concerns whether a specific measure relates to other measures as
the underlying theory predicts.
4. Concurrent Validity. When the instrument produces
results similar to those of similar, already validated tests, it has
concurrent validity.
5. Predictive Validity. When the instrument is able to
produce results similar to those of similar tests that will be
employed in the future, it has predictive validity. This is
particularly useful for aptitude tests.
RELIABILITY
• RELIABILITY refers to the consistency of the measures or results of the
instrument.
a. Test-retest Reliability. It is established by giving the same test to the same
group of respondents twice and checking the consistency of the two sets of scores.
b. Equivalent Forms Reliability. It is established by administering to the same
group of respondents two tests that are identical except for their wording.
c. Internal Consistency Reliability. It determines how well the items measure
the same construct. It is reasonable to expect that a respondent who scores high
on one item will also score high on similar items. There are three common ways to
measure internal consistency: the split-half coefficient, Cronbach’s
alpha, and the Kuder-Richardson formula.
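Of the internal consistency measures named above, Cronbach's alpha is straightforward to compute by hand. A minimal Python sketch (the item scores below are invented; real studies would typically use statistical software):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, where each item is the
    list of scores it received (respondents in the same order)."""
    k = len(item_scores)                          # number of items
    totals = [sum(t) for t in zip(*item_scores)]  # total score per respondent
    sum_item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Hypothetical 3-item scale answered by 5 respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 4, 4, 1],
]
print(round(cronbach_alpha(items), 3))  # → 0.898 for this invented data
```

By convention, alpha values of about 0.7 and above are taken to indicate acceptable internal consistency.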
Key Features of Classic
Experimental Design
•Three Main Features:
• Independent and Dependent Variables
• Experimental and Control Groups
• Pre-Testing and Post-Testing (DeCarlo, 2018)
Independent and Dependent Variables
(IV & DV)
• Independent Variable (IV):
• Manipulated by the researcher (treatment/intervention)
• Varies by field:
• Education: Teaching strategy
• Psychology: Counseling method
• Medicine: Vaccine
• Business: Marketing strategy
• Agriculture: Seedlings
• Dependent Variable (DV):
• The measured outcome; the effect of the intervention (IV) is observed on the DV
Experimental, Control, and Comparison
Groups
• Experimental Group: Receives the
intervention
• Control Group: No intervention, for
baseline comparison
• Comparison Group: Exposed to the
current standard practice
Example:
Experimental group: New teaching
method + test
Comparison group: Traditional
teaching + test
Control group: No lesson + test
• Pre-Testing and Post-
Testing
• Pre-Test: Given before
intervention to establish
baseline
• Post-Test: Given after
intervention to assess
impact
• Purpose: Comparison
of results to evaluate
intervention efficacy
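One common way to carry out that comparison is a paired t statistic on each respondent's pre-test/post-test difference. A minimal sketch (the scores are invented; a real analysis would also look up the p-value for the statistic):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre-test vs post-test scores of the
    same respondents (a positive t means scores increased)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical test scores for one group of six learners.
pre  = [60, 55, 70, 65, 58, 62]
post = [68, 60, 75, 70, 66, 65]
print(round(paired_t(pre, post), 2))
```

A large t relative to its critical value suggests the post-test gain is unlikely to be due to chance alone.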
Different Types Of Experimental
Design
• Pre-Experimental Research Designs
• Pre-Experimental Design includes the basic steps of experimental research,
except that it does not have an equivalent control group to compare the results
with.
One Shot Case Study                 X 𝑂2
One Group Pre-test Post-test Study  𝑂1 X 𝑂2
Static Group Comparison Study       X 𝑂2
                                      𝑂2
Key:
X = intervention
𝑂1 = pre-test
𝑂2 = post-test
Quasi-Experimental Research Designs
• Quasi-Experimental
Design has a non-
equivalent control
group to compare
with, but it still lacks
random assignment
of participants.
Pre-test Post-test Non-Equivalent
Groups
𝑂1 X 𝑂2
𝑂1 𝑂2
Time-Series Design
𝑂1 𝑂1 X 𝑂2 𝑂2
Non-Equivalent Before-After Design
𝑂1 𝑂1 X 𝑂2 𝑂2
𝑂1 𝑂1 𝑂2 𝑂2
Key:
X = intervention
𝑂1 = pre-test
𝑂2 = post-test
True Experimental Research Designs
• True Experimental Design
employs an equivalent control
group to compare the results
of the study with, and
participants are randomly
assigned to each group.
Post-test Equivalent Groups
𝑅 X 𝑂2
𝑅 𝑂2
Pre-test Post-test Equivalent Groups
𝑅 𝑂1 X 𝑂2
𝑅 𝑂1 𝑂2
Solomon Four-Group Design
𝑅 𝑂1 X 𝑂2
𝑅 𝑂1 𝑂2
𝑅 X 𝑂2
𝑅 𝑂2
Key:
R = randomization 𝑂1 = pre-test
X = intervention 𝑂2 = post-test
PLACEBO EFFECT
• A placebo is a simulated treatment that does not contain the
active ingredient that the experimental group is receiving,
and the placebo effect is the positive effect of such an
intervention (Price et al., 2015). In other words, the placebo
group is not really receiving any kind of treatment.
• These positive effects can be attributed to the placebo
group's belief that they will get better. The placebo
effect reduces their anxiety, stress, and depression, and
can change their perception and even improve the
functioning of their immune system (Price et al., 2008, as
cited in Price et al., 2015).
Characteristics of Good Quantitative
Research
1.Reliability: Consistency of measurements or
observations
2.Validity: Measurement represents what it
intends to measure
3.Replicability: Study can be repeated for
verification
4.Generalizability: Applicability of findings to
the broader population
Describing the Research
Intervention
• Variation in the Independent Variable:
• Wide: extent of exposure (e.g., single vs.
multiple counseling sessions)
• Fine: level of measurement (e.g., categorical or
quantitative)
• Assignment to Groups:
• Randomized Design: each participant has an equal
chance of assignment to any group
• Randomized Block Design: grouping by
characteristics (e.g., age, gender) before
randomization
• Independent vs. Repeated Measures Design:
• Independent: each participant receives only one level of the intervention
• Repeated: each participant receives multiple levels, which may
introduce carryover effects (practice, fatigue, context effects)
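The two randomized assignment schemes above can be sketched in a few lines of Python (participant names and the blocking characteristic are invented for the example):

```python
import random

def randomize(participants, seed=None):
    """Randomized design: shuffle the pool, then split it in half
    into experimental and control groups."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

def block_randomize(participants, block_key, seed=None):
    """Randomized block design: group participants by a
    characteristic first, then randomize within each block."""
    blocks = {}
    for p in participants:
        blocks.setdefault(block_key(p), []).append(p)
    experimental, control = [], []
    for members in blocks.values():
        exp, ctrl = randomize(members, seed)
        experimental += exp
        control += ctrl
    return experimental, control

# Hypothetical participants blocked by sex.
people = [("Ana", "F"), ("Ben", "M"), ("Cara", "F"), ("Dan", "M")]
exp, ctrl = block_randomize(people, block_key=lambda p: p[1], seed=1)
# Each group now contains exactly one "F" and one "M" participant.
```

Blocking before randomizing guarantees the blocking characteristic is balanced across groups, which plain shuffling only achieves on average.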
Controlling Confounding Variables
•Methods:
• Restriction: Only participants with similar
confounding values
• Matching: Pairing experimental and control
participants with similar values
• Statistical Control: Including confounders in the
regression model
• Randomization: Balances confounders across
large samples
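Of these methods, matching is the easiest to illustrate. A hypothetical sketch that pairs each experimental participant with the unused control participant closest in age (names and ages are invented):

```python
def match_pairs(experimental, control, confounder):
    """Greedy matching: pair each experimental participant with the
    unused control participant whose confounder value is closest."""
    pool = list(control)
    pairs = []
    for exp in experimental:
        best = min(pool, key=lambda c: abs(confounder(c) - confounder(exp)))
        pool.remove(best)
        pairs.append((exp, best))
    return pairs

# Hypothetical participants as (name, age); age is the confounder.
exp_group  = [("E1", 21), ("E2", 34)]
ctrl_group = [("C1", 33), ("C2", 22), ("C3", 40)]
print(match_pairs(exp_group, ctrl_group, confounder=lambda p: p[1]))
# E1 (age 21) is paired with C2 (22); E2 (34) with C1 (33).
```

Comparing outcomes within matched pairs reduces the chance that a difference is due to the confounder rather than the intervention.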
Adhering to Research Ethics
•Informed Consent:
• Explains study purpose, participant role,
benefits, and potential risks
• Ensures participant confidentiality and the
right to withdraw

Editor's Notes

  • #1 Quantitative Research Instrument. What do you think will happen if the tools for building a house are not prepared meticulously? The same is true when gathering information to answer a research problem: the tools, or instruments, should be prepared carefully. In constructing a quantitative research instrument, it is very important to remember that the tools created should elicit responses or data that can be numerically analyzed.
  • #2 However, interviews and observation checklists can still be used in quantitative research once the information gathered is translated into numerical data.
  • #3 In constructing the research instrument of the study, there are many factors to be considered. The type of instrument, reasons for choosing the type, and the description and conceptual definition of its parts are some of the factors that need to be decided before constructing a research instrument. Furthermore, it is also very important to understand the concepts of scales of research instruments and how to establish validity and reliability of instruments.
  • #12 In experimental research, the researcher manipulates the Independent Variable (IV) and measures its effect on the Dependent Variable (DV). The IV is also known as the treatment or intervention, the variable you are studying. These interventions vary depending on the field of study. In education, it can be a teaching strategy; in psychology, a different form of counseling; in medicine, a newly formulated vaccine; in manufacturing, a new process; in business, a marketing strategy; and in agriculture, a new type of seedling to grow. The effect of these interventions can be tested by comparing two groups: the experimental group, also known as the treatment group, which is exposed to the intervention, and the control group, which is not. In some cases the researcher adds another group called the comparison group. This group does not receive the intervention being studied; instead, it is exposed to the current practice in the field.
  • #18 The control group is often called the placebo group in health research. For example, a researcher wants to know the effect of caffeine on heart rate 15 minutes after drinking coffee (Rutberg & Bouikidis, 2018). In this experimental set-up, the experimental group will drink caffeinated coffee while the placebo group will drink decaffeinated coffee; heart rates will be measured before drinking the coffee and 15 minutes after.
  • #20 To apply these characteristics in your experimental research, follow these steps in describing your intervention (Bevans, 2020): describe how widely and how finely the independent variable may vary. You describe how wide the variation of the independent variable is by establishing how mild or extreme the exposure to the intervention will be. For example, in psychology, will the participants be exposed to counseling only once, or will there be a series of sessions before measuring the results? In manufacturing, will the performance of the new machine be measured within a single shift, a 24-hour shift, or for a whole week? And in science research, will the product be exposed to extreme heat or pressure, or just a little over the standard values? Describing how finely the independent variable varies means identifying the level of measurement you will use: is it a categorical or a quantitative variable? For example, in education, will you measure the academic performance of the students by just pass or fail; by With Highest Honors, With High Honors, With Honors, or Non-Honors; or by their General Weighted Average?