Internal Validity
We will be discussing…
 INTERNAL VALIDITY
 THREATS TO INTERNAL VALIDITY
 STRATEGIES TO ENHANCE INTERNAL VALIDITY
INTERNAL VALIDITY
 Internal validity refers to the degree of confidence that the results are due to the treatment alone and not influenced by other factors.
 Validity in research refers to the extent to which an experiment delivers true results.
 High internal validity means that the study’s findings are trustworthy.
 In simpler terms: Did the experiment really test what it was supposed to test?
INTERNAL VALIDITY
 Example:
 Suppose you’re testing whether a new teaching method improves
students’ test scores. If the students’ scores improve, you need to be
sure it’s because of the teaching method and not because:
 They already knew the material.
 They got extra help outside of class.
 The test was easier than usual.
THREATS TO INTERNAL VALIDITY
 HISTORICAL EVENTS
 MATURATION
 TESTING
 INSTRUMENTATION
 SELECTION BIAS
 ATTRITION
 STATISTICAL REGRESSION
History
 Something happens during the experiment that affects the outcome but isn’t related to
your treatment.
 Example:
 You’re testing a diet plan, and halfway through, there’s a health campaign on TV about
eating healthy. People might change their eating habits because of the campaign, not
your diet plan.
Maturation
 Changes happen naturally over time, not because of your experiment.
 Example:
 You’re studying how a reading program helps kids improve. But kids naturally get better
at reading as they grow older, so it might not be the program causing the improvement.
Testing
 The act of taking a test more than once can change how people perform.
 Example:
 You give a pre-test, then teach something, and then a post-test. If students do better on
the post-test, it might just be because they got familiar with the test format, not your
teaching.
Instrumentation
 Changes in how measurements are taken affect results.
 Example:
 You use two different tools to measure stress levels in an experiment, and they give
slightly different results. This inconsistency can mess up your findings.
Selection Bias
 The groups in your experiment are not equal to begin with.
 Example:
 You’re comparing two classes to see if a teaching method works, but one class
already has higher-performing students. That difference can affect your results.
Attrition (or Dropout)
 Participants drop out of the study, and it affects the results.
 Example:
 In a weight-loss program, people who don’t see results quit, leaving only the
successful cases. It looks like the program worked, but the data is biased.
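To make this bias concrete, here is a minimal simulation sketch (the numbers and the dropout rule are invented for illustration, not taken from the slides): the program has no real effect, but because unsuccessful participants drop out, the average computed from the remaining participants looks like a success.

```python
import random

random.seed(0)

# Hypothetical setup: the program has no real effect, so each person's
# true weight change is centred on zero.
participants = [random.gauss(0, 3) for _ in range(1000)]  # change in kg

# Attrition rule for illustration: people who are not losing weight are
# much more likely to quit before the final weigh-in.
completers = [change for change in participants
              if change < 0 or random.random() < 0.3]

true_mean = sum(participants) / len(participants)
observed_mean = sum(completers) / len(completers)

print(f"Mean change, all participants: {true_mean:+.2f} kg")
print(f"Mean change, completers only:  {observed_mean:+.2f} kg")
# The completers-only mean is clearly negative (apparent weight loss)
# even though the true effect is zero, because dropout was not random.
```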
Statistical Regression
 Extreme results naturally move closer to the average over time.
 Example:
 You pick students with very low test scores to test a program. Even without the
program, their scores might improve because they were unusually low to start
with.
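A similar sketch can illustrate regression to the mean (again with invented numbers, assuming each observed score is a stable ability plus random day-to-day noise): students selected for very low first scores score higher the second time even with no program at all.

```python
import random

random.seed(1)

# Hypothetical model: an observed score is stable ability plus random
# day-to-day noise (luck, guessing, mood).
def observed_score(ability):
    return ability + random.gauss(0, 10)

abilities = [random.gauss(70, 10) for _ in range(10_000)]
first_test = [observed_score(a) for a in abilities]
second_test = [observed_score(a) for a in abilities]  # no program in between

# Select students with very low first-test scores, as in the example above.
selected = [i for i, score in enumerate(first_test) if score < 50]

mean_first = sum(first_test[i] for i in selected) / len(selected)
mean_second = sum(second_test[i] for i in selected) / len(selected)

print(f"Selected students, test 1 average: {mean_first:.1f}")
print(f"Selected students, test 2 average: {mean_second:.1f}")
# The second average is noticeably higher with no intervention at all,
# because extremely low first scores were partly bad luck.
```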
STRATEGIES TO ENHANCE INTERNAL VALIDITY
 RANDOM ASSIGNMENT
 Using random assignment to distribute participants across groups helps ensure that the groups are comparable (see the sketch after this list).
 CONTROL GROUP
 Including a control group that does not receive the experimental treatment helps isolate the effect of the treatment from the effect of other factors.
 BLINDING
 Keeping participants and researchers unaware of group assignment
can help reduce bias.
 CONSISTENT PROCEDURES
 Standardizing the way the experiment is conducted for all participants can minimize variability due to procedural differences.
 PRE-TEST AND POST-TEST DESIGNS
 Using pre-test and post-test measurements helps researchers measure changes more accurately.
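As noted under RANDOM ASSIGNMENT above, here is a minimal sketch of how random assignment might be carried out in practice; the participant names and group labels are hypothetical.

```python
import random

random.seed(42)

# Hypothetical roster for illustration.
participants = [f"participant_{i:02d}" for i in range(1, 21)]

# Shuffle the roster, then split it in half so group sizes stay balanced.
random.shuffle(participants)
half = len(participants) // 2
groups = {
    "treatment": participants[:half],  # gets the new teaching method
    "control": participants[half:],    # gets the usual instruction
}

for name, members in groups.items():
    print(f"{name} ({len(members)}): {sorted(members)}")
# Because assignment is random, pre-existing differences (prior knowledge,
# motivation, outside help) are spread across both groups on average.
```

Shuffling and then splitting, rather than flipping a coin for each person, keeps the two groups the same size while still making the assignment random.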
References
 Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Houghton Mifflin.
 Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative,
quantitative, and mixed methods approaches (5th ed.). Sage.
