Experimental Research
Experiments
 Begin with a Hypothesis
 Modify Something in a Situation
 Compare Outcomes
 Cases or People are Termed “Subjects”
Random Assignment
 Equal Probability of Selection
 Allows Accurate Prediction
 An Alternative to Random Assignment is Matching (contrasted in the sketch below)
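A minimal sketch contrasting the two approaches, assuming a small hypothetical subject pool and a single matching variable ("age"); none of these names or values come from the slides.

```python
import random

# Hypothetical pool of 12 subjects with one attribute ("age") to match on.
random.seed(3)
subjects = [{"id": i, "age": random.randint(18, 65)} for i in range(12)]

# Random assignment: chance alone decides group membership.
shuffled = subjects[:]
random.shuffle(shuffled)
random_exp, random_ctl = shuffled[:6], shuffled[6:]

# Matching: order subjects on the matching variable, then split each
# adjacent pair between the groups so the groups start out comparable.
by_age = sorted(subjects, key=lambda s: s["age"])
matched_exp, matched_ctl = by_age[0::2], by_age[1::2]

def mean_age(group):
    return sum(s["age"] for s in group) / len(group)

print("Random:  exp %.1f  ctl %.1f" % (mean_age(random_exp), mean_age(random_ctl)))
print("Matched: exp %.1f  ctl %.1f" % (mean_age(matched_exp), mean_age(matched_ctl)))
```

Matching guarantees the groups are comparable on the matched variable only; random assignment tends to balance all variables, known and unknown, which is why it is usually preferred.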
Parts of the Classic Experiment
 Treatment or Independent Variable
 Dependent Variable
 Pretest
 Posttest
 Experimental Group
 Control Group
 Random Assignment (a sketch tying these parts together follows)
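A minimal sketch of the classic design in Python: random assignment to two groups, a pretest, the treatment applied only to the experimental group, and a posttest. The scores and the +5 treatment effect are fabricated purely for illustration.

```python
import random

random.seed(1)

def pretest():
    # Dependent variable measured before treatment (arbitrary scale).
    return random.gauss(50, 5)

def posttest(score, treated):
    # Independent variable: an illustrative +5 point treatment effect,
    # plus some measurement noise on the retest.
    return score + (5 if treated else 0) + random.gauss(0, 2)

# Random assignment of 20 hypothetical subjects to two groups of 10.
pre_scores = [pretest() for _ in range(20)]
random.shuffle(pre_scores)
experimental, control = pre_scores[:10], pre_scores[10:]

# Compare outcomes: change from pretest to posttest in each group.
exp_change = [posttest(s, True) - s for s in experimental]
ctl_change = [posttest(s, False) - s for s in control]

print("Mean change, experimental:", round(sum(exp_change) / 10, 1))
print("Mean change, control:     ", round(sum(ctl_change) / 10, 1))
```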
Variations on Experimental Design
 Pre-experimental Design
 One-shot Case Study
 One-group Pretest-Posttest Design
 Static Group Comparison
 Quasi-Experimental and Special Designs
Types of Validity
 External Validity
 Do the results apply to the broader
population?
 Internal Validity
 Is the independent variable responsible for the
observed changes in the dependent variable?
Confounding Variables That Threaten Internal Validity
 Maturation
 Changes due to normal growth or
predictable changes
 History
 Changes due to an event that occurs during the study and might affect the results
 Instrumentation
 Any change in the calibration of the measuring
instrument over the course of the study
 Regression to the Mean
 Tendency for participants selected because of extreme scores to be less extreme on a retest (simulated after this list)
 Selection
 Any factor that creates groups that are not equal at the
start of the study
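Regression to the mean falls out of any measurement that mixes stable ability with random noise. A minimal simulation, with a fabricated score model (ability ~ N(100, 10), noise ~ N(0, 10)) and an arbitrary selection cutoff of 120:

```python
import random

random.seed(7)

# Each observed score = stable true ability + random measurement noise.
true_ability = [random.gauss(100, 10) for _ in range(1000)]
test1 = [t + random.gauss(0, 10) for t in true_ability]
test2 = [t + random.gauss(0, 10) for t in true_ability]

# Select participants because of extreme first-test scores.
extreme = [i for i, s in enumerate(test1) if s > 120]

mean1 = sum(test1[i] for i in extreme) / len(extreme)
mean2 = sum(test2[i] for i in extreme) / len(extreme)
print(f"Selected group, test 1 mean: {mean1:.1f}")  # well above the cutoff
print(f"Same group,     test 2 mean: {mean2:.1f}")  # drifts back toward 100
```

No treatment occurs between the two tests, yet the selected group's retest mean drops; a naive pretest-posttest comparison would misread this drift as an effect.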
 Attrition
 Loss of participants during a study; are the participants who drop out different from those who continue?
 Diffusion of treatment
 Changes in participants' behavior in one condition because of information they obtained about the procedures in other conditions
Subject Effects
 Participants are not passive
 They try to understand the study so they know what they “should” do
 This behavior is termed “subject effects”
 Participants respond to subtle cues about what is
expected (termed “demand characteristics”)
 Placebo effect: treatment effect that is due
to expectations that the treatment will work
Experimenter Effects
 Any preconceived idea the researcher has about how the experiment should turn out, which can bias the results
 Compensatory effects
Types of Control Procedures
 General control procedures (applicable to
virtually all research)
 Control over subject and experimenter effects
 Control through the selection and assignment
of participants
 Control through specific experimental design
Principles of Experimental Design
 Control the effects of lurking
variables on the response, most
simply by comparing two or more
treatments
 Randomize
 Replicate
Randomization
 The use of chance to divide experimental
units into groups is called randomization.
 Comparison of effects of several treatments
is valid only when all treatments are applied
to similar groups of experimental units.
How to randomize?
 Flip a coin or draw numbers out of a hat
 Use a random number table
 Use a statistical software package or program (a minimal sketch follows this list)
 Minitab
 www.whfreeman.com/ips
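Beyond coins and random number tables, any language with a random number generator will do; here Python stands in for the packages named above, with a hypothetical set of twelve experimental units and three treatments.

```python
import random

# Twelve hypothetical experimental units, to be divided by chance
# into three treatment groups of equal size.
units = [f"unit_{i:02d}" for i in range(1, 13)]

random.shuffle(units)                      # chance alone decides the order
groups = [units[i::3] for i in range(3)]   # deal the shuffled units into 3 groups

for label, group in zip("ABC", groups):
    print(f"Treatment {label}: {sorted(group)}")
```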
Statistical Significance
 An observed effect so large that it would rarely occur by chance is called statistically significant (see the permutation sketch below)
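One way to make "would rarely occur by chance" concrete is a permutation check: reshuffle the group labels many times and count how often chance alone produces a difference as large as the observed one. The outcome data here are fabricated for illustration.

```python
import random

random.seed(5)

# Hypothetical outcome scores for two groups of 10.
treatment = [12, 14, 11, 15, 13, 16, 14, 12, 15, 13]
control   = [10, 11,  9, 12, 10, 13, 11, 10, 12, 11]

observed = sum(treatment) / 10 - sum(control) / 10
pooled = treatment + control

# How often does a random relabeling match or beat the observed difference?
extreme, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:10]) / 10 - sum(pooled[10:]) / 10
    if diff >= observed:
        extreme += 1

print(f"Observed difference: {observed:.1f}")
print(f"Fraction of shuffles at least as large: {extreme / trials:.2%}")
```

A tiny fraction means the effect would rarely occur by chance, i.e. it is statistically significant.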
A few more things…
 Double-blind: neither the subjects nor the person administering the treatment knows which treatment any subject received (a coding sketch follows this list)
 Lack of realism is a major weakness of experiments: is it possible to duplicate the conditions we actually want to study?
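Double-blinding is commonly implemented by having a third party hold the key that maps opaque codes to treatments, so subjects and administrators see only the codes. A minimal sketch with hypothetical labels:

```python
import random

random.seed(11)

# Ten of each treatment, shuffled so the codes carry no pattern.
treatments = ["drug"] * 10 + ["placebo"] * 10
random.shuffle(treatments)

# The third party keeps this key; subjects and administrators only
# ever see the opaque codes on the containers they hand out.
key = {f"code_{i:03d}": t for i, t in enumerate(treatments)}

blinded_labels = list(key)   # what the administrator works from
print(blinded_labels[:3])    # e.g. ['code_000', 'code_001', 'code_002']
# Only after data collection is the key used to unblind the results.
```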
