Educational Research: Causal-Comparative & Experimental Studies. ELT-718 Research Methods. Asst. Prof. Dr. Hasan BEDİR
Research... The systematic application of a family of methods employed to provide trustworthy information about problems … an ongoing process based on many accumulated understandings and explanations that, when taken together, lead to generalizations about problems and the development of theories
The basic steps of research... Scientific and disciplined inquiry is an orderly process, involving: recognition and identification of a topic to be studied (“problem”); description and execution of procedures to collect information (“method”); objective data analysis; statement of findings (“results”)
Research methods... Quantitative … … collects and analyzes numerical  data  obtained from formal  instruments
Quantitative methods... descriptive research (“survey research”); correlational research; causal-comparative research (“ex post facto research”); experimental research
Research Methodologies... a continuum rather than “either/or”.
Qualitative. Goal: to understand, predict; descriptive accounts; similarities and contrasts; applied and theoretical research questions; field study; natural conditions.
Quantitative. Goal: to predict and control; measure and evaluate; generalize to a population, reproduction; basic and theoretical; hypothesis testing; empirical study; controlled, contrived conditions.
Data Collection.
Quantitative: emphasis on numerical data and measurable variables; data are collected under controlled conditions in order to rule out the possibility that variables other than the one under study can account for the relationships identified.
Qualitative: emphasis on observation and interpretation; data are collected within the context of their natural occurrence.
Causal-Comparative Research: The Purpose. The purpose is to explain educational phenomena through the study of cause-and-effect relationships. The presumed cause is called the independent variable and the presumed effect is called the dependent variable. Designs in which the researcher does not manipulate the independent variable are called ex post facto research.
Causal-Comparative Research (Continued). Causal-comparative research is also a type of non-experimental investigation in which researchers seek to identify cause-effect relationships by forming groups of individuals in whom the independent variable is present or absent and then determining whether the groups differ on the dependent variable.
causal-comparative research (“ ex post facto  research”)  … at least two different groups are compared on a  dependent variable  or measure of performance (called the “effect”) because the  independent variable  (called the “cause”) has already occurred or cannot be manipulated
Research variables... Independent … … an activity or characteristic believed to make a difference with respect to some behavior … (syn.) experimental variable, cause, treatment
dependent variable … … the change or difference occurring as a result of the independent variable … (syn.) criterion variable, effect, outcome, posttest
Data analysis and interpretation… … researcher uses a variety of  descriptive  and  inferential statistics : mean standard   deviation t-test analysis of variance chi squared
mean … the descriptive statistic indicating the average performance of an individual or group on a measure of some variable
standard deviation … the descriptive statistic indicating the spread of a set of scores around the mean
t-test … the inferential statistic indicating whether the means of  two  groups are significantly different from one another
analysis of variance (“ANOVA”) … the inferential statistic indicating the presence of a significant difference among the means of  three or more groups
chi squared (χ²) … the inferential statistic indicating that there is a greater than expected difference among group frequencies
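As a concrete illustration of the statistics defined above, the following is a minimal sketch in Python using NumPy and SciPy; the scores and frequency counts are invented purely for demonstration and do not come from any study discussed here.

```python
# Minimal sketch: the descriptive and inferential statistics defined above,
# computed on invented scores (not data from any actual study).
import numpy as np
from scipy import stats

group_a = np.array([72, 85, 78, 90, 66, 81, 77])   # e.g., lecture only
group_b = np.array([80, 88, 91, 75, 84, 79, 93])   # e.g., lecture + discussion
group_c = np.array([70, 74, 69, 82, 76, 71, 73])   # a third group, for ANOVA

# Descriptive statistics: mean and standard deviation of one group
print("mean:", group_a.mean())
print("standard deviation:", group_a.std(ddof=1))          # sample SD

# t-test: are the means of two groups significantly different?
t, p = stats.ttest_ind(group_a, group_b)
print("t-test:", t, p)

# One-way ANOVA: is there a significant difference among three or more means?
f, p = stats.f_oneway(group_a, group_b, group_c)
print("ANOVA:", f, p)

# Chi squared: do observed group frequencies differ more than expected?
observed = np.array([[30, 20],      # e.g., pass / fail counts, group A
                     [18, 32]])     # pass / fail counts, group B
chi2, p, dof, expected = stats.chi2_contingency(observed)
print("chi squared:", chi2, p)
```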
Research Designs. “True” Experimental Design: the researcher actually manipulates the independent variable. Non-Experimental Design: passive observation by the researcher. Quasi-Experimental Design: assignment to experimental conditions is not random.
Experimental Designs Experimental research design: The researcher has control over the experiment in terms of sample selection, treatment, environment, etc. Experimental designs are typical in psychology, medicine, education, etc.
Experimental Designs. Experiments often report pretest and posttest observations. POST-TEST ONLY:  X  O1   where Ot = observation of the experimental group at time t, X = treatment, Oc = observation of the control group
experimental research … the researcher selects participants and divides them into two or more groups having similar characteristics and, then, applies the treatment(s) to the groups and measures the effects upon the groups
Types of experimental comparison… 1.  comparison of two different approaches ( A  versus  B ) 2.  comparison of an existing approach to a new approach ( A  and  ~  A ) 3.  comparison of differing   amounts of a single approach ( A  and  a  or  a  and  A )
where: A  – experimental (“treatment”) group B  – control (“no treatment,” “nonmanipulated”) group
Group experimental designs… 1.  single-variable 2.  factorial
types of pre-experimental designs one-shot case study X  O … a single group exposed to a treatment ( X ) and then posttested ( O )
one-group pretest-posttest design O  X  O  … a single group is pretested ( O ), exposed to a treatment ( X ) and, then, is posttested ( O )
static group comparison  X1  O / X2  O … involves at least two groups, one receiving a new, or experimental, treatment (X1) and another receiving a traditional, or control, treatment (X2); both are then posttested (O)
“True” experiments defined. An experiment that utilizes random assignment to conditions in an effort to ensure that the participants in each condition are statistically identical. In doing so, any differences observed in the dependent variable are attributable only to the presence/absence of the independent variable. Campbell & Stanley’s taxonomy:  R  O1  X  O2  /  R  O3  O4   where R = random assignment, O = observation, X = treatment
types of true experimental designs pretest-posttest control group design R  O  X 1   O R  O  X 2   O
… at least two groups are formed by random assignment ( R ), administered a pretest ( O ), receive different treatments ( X 1 , X 2  ), are administered a posttest, and posttest scores are compared to determine effectiveness of treatments
posttest-only control group design R  X 1   O R  X 2   O
… at least two groups are formed by random assignment ( R ), receive different treatments ( X 1 , X 2  ), are administered a posttest, and posttest scores are compared to determine effectiveness of treatments
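To make the R / X / O notation concrete, here is a small sketch of a posttest-only control group design in Python; the random assignment, treatment effect sizes, and posttest scores are all simulated, so the numbers serve only to illustrate the logic of the design.

```python
# Sketch of a posttest-only control group design (R  X1  O / R  X2  O):
# random assignment to two treatments, a posttest, and a comparison of means.
# All scores are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
participants = np.arange(40)
rng.shuffle(participants)                                  # R: random assignment
group_x1, group_x2 = participants[:20], participants[20:]

# Hypothetical posttest scores after each treatment (invented effect sizes)
post_x1 = rng.normal(loc=78, scale=8, size=len(group_x1))  # treatment X1
post_x2 = rng.normal(loc=72, scale=8, size=len(group_x2))  # treatment X2

t, p = stats.ttest_ind(post_x1, post_x2)                   # O: compare posttests
print(f"t = {t:.2f}, p = {p:.3f}")
```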
Solomon four-group design R  O  X 1   O R  O  X 2   O R  X 1   O R  X 2   O
… four groups are formed by random assignment (R) of participants, two groups are pretested (O) and two are not, one pretested and one unpretested group receive the experimental treatments (X1, X2), each group is administered a posttest on the dependent variable, and posttest scores are compared to determine effectiveness of treatments
factorial designs … involve two or more independent variables with at least one independent variable being manipulated by the researcher
Experimental Design: Factorial Design Diagram. Independent Variable #1: Teaching Method (Reading/Lecture/Etc. vs. Lecture Only). Independent Variable #2: Aptitude (Low vs. High). Each cell contains randomly assigned 3rd graders: the two low-aptitude cells contain students scoring below 60 on an aptitude test, and the two high-aptitude cells contain students scoring above 85.
Independent Variable #1: Teaching Method. How many possible teaching methods are there? Which will be the methods used in the study? If more than one is used, each method may be considered a level of the factor known as Teaching Method. Teaching Method: Lecture only; Lecture & Small Group Discussion.
Independent Variable #2: Aptitude. How many possible levels of aptitude are there? How many may be represented in the group of subjects participating in the study? Once identified, these (e.g., Low, High) may be considered the levels of the factor known as Aptitude.
2 X 2 grid: Teaching Method (Lecture only | Lecture & Small Group Discussion) crossed with Aptitude (Low | High)
examples of factorial designs two-by-two factorial design  (four cells) 2  X  2 … two types of factors (e.g., method of instruction) each of which has two levels (e.g., traditional vs. innovative)
A 2 X 2 factorial design… diagram: two independent variables, A and B, form four cells (Group #1 through Group #4); one independent variable is manipulated and the other is not; each group is observed (O) on the dependent variable.
A 2 X 2 factorial design… interaction plots contrast two cases for factors A and B: no interaction between factors vs. interacting factors
two-by-three factorial design  (six cells) 2  X  3 … two types of factors (e.g., motivation; interest) each of which has three levels (e.g., high, medium, low)
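One common way to analyze a factorial design of this kind is a two-way ANOVA, which tests each main effect and the interaction between factors. The sketch below assumes the statsmodels library and invented scores for a hypothetical 2 X 2 teaching-method-by-aptitude study; it is an illustration, not an analysis from these slides.

```python
# Sketch: two-way ANOVA for a 2 x 2 factorial design (teaching method x aptitude).
# Scores are invented; the output table lists both main effects and the
# method:aptitude interaction.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "method":   ["lecture"] * 6 + ["lecture_discussion"] * 6,
    "aptitude": (["low"] * 3 + ["high"] * 3) * 2,
    "score":    [61, 58, 64, 80, 83, 78, 70, 73, 68, 82, 88, 85],
})

model = ols("score ~ method * aptitude", data=data).fit()   # main effects + interaction
print(anova_lm(model, typ=2))
```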
Single-subject experimental designs… 1.  A – B – A withdrawal 2.  multiple baseline designs 3.  alternating treatments designs
simple A – B design … baseline measurements ( O ) are repeatedly made until stability is established, then the treatment ( X ) is introduced and an appropriate number of measurements ( O ) are made during treatment implementation
simple A – B design:   O  O  O  |  X O  X O  X O   (A = baseline phase | B = treatment phase)
A – B – A withdrawal designs … baseline measurements ( O ) are repeatedly made until stability is established, then the treatment ( X ) is introduced and an appropriate number of measurements ( O ) are made during treatment implementation, followed by an appropriate number of baseline measurements ( O ) to determine stability of treatment ( X )
A – B – A withdrawal designs:   O  O  O  |  X O  X O  X O  |  O  O  O   (A = baseline phase | B = treatment phase | A = baseline phase)
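Since single-subject results are usually judged by visual inspection of graphed data (see the data-analysis slide near the end), the sketch below plots a hypothetical A – B – A series with matplotlib; every observation value is invented for illustration.

```python
# Sketch: plotting invented A - B - A withdrawal data for visual inspection.
import matplotlib.pyplot as plt

baseline_1 = [4, 5, 4, 5, 4]        # A: baseline phase
treatment  = [7, 8, 9, 9, 10]       # B: treatment phase
baseline_2 = [6, 5, 5, 4, 5]        # A: withdrawal, return to baseline

scores = baseline_1 + treatment + baseline_2
sessions = list(range(1, len(scores) + 1))

plt.plot(sessions, scores, marker="o")
plt.axvline(x=len(baseline_1) + 0.5, linestyle="--")                     # A | B
plt.axvline(x=len(baseline_1) + len(treatment) + 0.5, linestyle="--")    # B | A
plt.xlabel("Session")
plt.ylabel("Target behavior")
plt.title("A - B - A withdrawal design")
plt.show()
```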
multiple-baseline designs … used when a return to baseline conditions is difficult or impossible since treatment effects oftentimes do not disappear when a treatment is removed
…“ multiple” refers to the study of more than one behavior, participant, or setting
… instead of collecting baseline data on one specific behavior, data are collected on: (1) several behaviors for one participant, (2) one behavior for several participants, or (3) one behavior and one participant in several settings
… then, over a period of time, the treatment is systematically applied to each behavior (or participant, or setting) one at a time until all behaviors (or participants or settings)  have been exposed to the treatment
multiple baseline design. Example: one treatment applied to three behaviors in three settings
behavior 1 (setting #1):  O  O  OXOXOXOXOXOXOXOXOXOXOXO
behavior 2 (setting #2):  O  O  O  O  O  OXOXOXOXOXOXOXO
behavior 3 (setting #3):  O  O  O  O  O  O  O  O  OXOXOXO
… each baseline (A) remains in place while the treatment (B) is applied in the other settings
Threats to validity… … internal : factors other than the independent variable that affect the dependent variable … external : factors that affect the generalizability of the study to groups and settings beyond those of the experiment
Threats to internal validity… 1.  history 2.  maturation 3.  testing 4.  instrumentation 5.  statistical regression 6.  differential selection of participants 7.  mortality 8.  selection-maturation interaction
history … the occurrence of events that are not part of the experimental treatment but that occur during the study and affect the dependent variable
maturation … the physical, intellectual, and emotional changes that occur naturally in a study’s participants over a period of time
testing … refers to improved scores on a posttest as a result of having taken a pretest
instrumentation … the unreliability or lack of consistency in measuring instruments that can result in an invalid assessment of performance
statistical regression … the tendency of participants who score highest on a test to score lower on a second, similar test and vice versa
differential selection of participants … the outcome when already-formed groups are compared, raising the possibility that the groups were different before the study even began
mortality … the case in which participants drop out of a study, which changes the characteristics of the groups and may significantly affect the study’s results
selection-maturation interaction … if already-formed groups are used in a study, one group may profit more (or less) from a treatment or have an initial advantage because of maturation, history, or testing factors
Threats to external validity… 1.  pretest-treatment interaction 2.  selection-treatment interaction 3.  multiple treatment interference 4.  specificity of variables 5.  treatment diffusion 6.  experimenter effects 7.  reactive effects
pretest-treatment interaction … the situation when participants respond or react differently to a treatment because they have been pretested
multiple-treatment interference … the situation when the same participants receive more than one treatment in succession
selection-treatment interaction … the situation when participants are not randomly selected for treatments
specificity of variables … the situation when a study (1) is conducted with a specific kind of participant, (2) is based on a particular operational definition of the independent variable, (3) uses specific dependent variables, (4) transpires at a specific time, and (5) occurs under a specific set of circumstances
treatment diffusion … the situation when different treatment groups communicate with and learn from each other
experimenter effects … the situation when the researchers present potential threats to the external validity of their own studies
reactive arrangements … the situation when a number of factors associated with the way in which a study is conducted interact with or shape the feelings and attitudes of the participants involved
Types of reactive arrangements… … Hawthorne effect: any situation in which participants’ behavior is affected not by the treatment per se but by their knowledge of participating in a study … compensatory rivalry: the control group, knowing it serves as the control group for a new, experimental treatment, competes to outperform the experimental group (“John Henry effect”)
… placebo effect: the situation in which half of the participants receive no treatment but believe they are receiving one … novelty effect: the situation in which participant interest, motivation, or engagement increases simply because they are doing something different
Controlling for extraneous (confounding) variables… 1.  randomization 2.  matching 3.  comparing homogeneous groups or subgroups 4.  using participants as their own controls 5.  analysis of covariance (ANCOVA)
randomization … the process of selecting and assigning participants in such a way that all individuals in the defined population have an equal and independent chance of being selected for the sample
matching … a technique for equating groups on one or more variables, usually the ones highly related to performance on the dependent variable (e.g., pairwise matching)
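A rough sketch of pairwise matching, using invented participant IDs and pretest scores: each member of an already-formed treatment group is paired with the unmatched comparison participant whose score is closest.

```python
# Sketch of pairwise matching on a single variable (e.g., a pretest score).
# Participant IDs and scores are invented for illustration.
treatment_pre = {"T1": 62, "T2": 48, "T3": 75}
control_pre   = {"C1": 60, "C2": 74, "C3": 50, "C4": 90}

pairs = []
available = dict(control_pre)                 # controls not yet matched
for t_id, t_score in treatment_pre.items():
    # pick the unmatched control whose pretest score is closest
    c_id = min(available, key=lambda c: abs(available[c] - t_score))
    pairs.append((t_id, c_id))
    del available[c_id]

print(pairs)   # [('T1', 'C1'), ('T2', 'C3'), ('T3', 'C2')]
```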
comparing homogeneous groups or subgroups … a technique to control an extraneous variable by comparing groups that are similar with respect to that variable (e.g., stratified sampling)
using participants as their own controls … exposing a single group to different treatments one treatment at a time
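Analysis of covariance (ANCOVA), the fifth control strategy listed above, is not defined on these slides; as a minimal sketch, assuming the statsmodels library and invented data, it compares posttest scores across groups while statistically adjusting for a covariate such as a pretest.

```python
# Sketch of ANCOVA: group differences on a posttest, controlling for the
# pretest as a covariate. Data are invented for illustration.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "group":    ["treatment"] * 5 + ["control"] * 5,
    "pretest":  [52, 48, 60, 55, 50, 53, 47, 61, 56, 49],
    "posttest": [70, 66, 78, 74, 69, 60, 55, 68, 63, 57],
})

model = ols("posttest ~ group + pretest", data=data).fit()   # group effect adjusted for pretest
print(anova_lm(model, typ=2))
```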
Data analysis and interpretation… for single-subject research … a visual inspection and analysis of graphical presentations of results … focuses upon: adequacy of the design; an assessment of treatment effectiveness ( clinical  vs.  statistical significance )
Mini-Quiz… True and false… … causal-comparative studies  attempt  to identify the cause-effect relationships; correlational studies do not True
… causal-comparative studies typically involve two (or more) groups and one independent variable, whereas correlational studies typically involve two (or more) variables and one group True
… causal-comparative studies involve relation, whereas correlational  studies involve cause False
… oftentimes, causal-comparative research is undertaken because the independent variable could be manipulated but should not be True
… one of the most important reasons for conducting causal-comparative research is to identify variables worthy of experimental investigation True
…“ lack of control” means that the researcher can and should  manipulate the independent variable False
… each group in a causal-comparative study represents a different population True
… the more similar two groups are on all relevant variables except the independent variable, the stronger the study is True
… there is random assignment to treatment groups from a single population in causal-comparative studies False
… lack of randomization, manipulation of the independent variable, and control are all sources of weakness in a causal-comparative design True
… matching, comparing homogenous groups or subgroups, and covariate analysis are strategies that enable researchers to overcome problems of initial group differences on an extraneous variable True
… interpretation of the findings in a causal-comparative study requires considerable caution because the cause may be the effect and the effect may be the cause True
… extraneous variables or confounding factors may be the real “cause” of both the independent and dependent variables True
Fill in the blank… … groups selected for a causal-comparative study that differ on some independent variable and are compared on some dependent variable comparison groups
Fill in the blank… … unexplained variables that influence a dependent variable  confounding factors extraneous variables
Fill in the blank… … a method for controlling extraneous variables by comparing groups that are homogeneous with respect to the extraneous variable comparing homogeneous groups
Fill in the blank… … a method for controlling extraneous variables by forming subgroups within each group that represent all levels of the control variable comparing homogeneous subgroups
Fill in the blank… … a statistical tool to determine the effects of the independent variable and the control variable on the dependent variable, both separately and in combination factorial analysis of variance
Fill in the blank… … the descriptive statistic indicating the average performance of a group on a measure of some variable mean
Fill in the blank… … the descriptive statistic indicating how clustered or spread out around the mean a set of scores is standard deviation
Fill in the blank… … the inferential statistic determining whether there is a significant difference between the means of  two  groups t-test
Fill in the blank… … the inferential statistic determining whether there is a significant difference between the means of  three or more  groups analysis of variance
Fill in the blank… … the inferential statistic determining whether there is a greater than expected difference among group frequencies chi squared
Fill in the blank… … activities by which a researcher endeavors to ensure that the results of a causal-comparative study are not tainted by extraneous variables control
