Experimental Research Overview
PowerPoint presentation created for graduate course in Research Methodologies. Very wordy and not my usual style, but had too much information to include to do much style-wise.


Usage Rights: CC Attribution-ShareAlike License
    Presentation Transcript

    • Experimental Research, by Mary Macin (Tuesday, February 15, 2011)
    • Experimental Research vs. Other Methods
      ❖ Can test for cause/effect relationships
      ❖ Manipulation of independent variable(s)
      Simply put: decisions about the forms and values of the IV, as well as about which group receives which treatment, are at the sole discretion of the researcher.
    • Variables in Experimental Research
      ❖ Independent Variable:
        ❖ Experimental variable, cause, or treatment
        ❖ The activity or characteristic the researcher believes makes a difference
      ❖ Dependent Variable:
        ❖ Criterion variable, effect, or posttest
        ❖ Outcome of the study
        ❖ Difference in group(s) that occurs as a result of the manipulation of the IV
        ❖ Only constraint: must represent a measurable outcome
    • Characteristics of Experimental Research
      ❖ Demanding and productive, but...
      ❖ Produces the soundest evidence of hypothesized cause-effect relationships
      ❖ Difference between correlational and experimental research:
        ❖ Correlational research can be used to predict a specific score for a specific individual
        ❖ Experimental research predicts more global results*
    • Steps in an Experimental Research Study
      1. Select and define the problem.
      2. Select subjects and [measurement] instruments.
      3. Select a design.
      4. Execute procedures.
      5. Analyze data.
      6. Formulate conclusions.
    • Role of the Researcher
      ❖ Forms or selects groups
      ❖ Decides what will happen to each group
      ❖ Attempts to control all variables and factors
      ❖ Observes and measures effect on the groups
      Every effort is made to make sure the two groups have equivalent variables, except for the independent variable.
    • Two Groups
      ❖ Experimental Group
        ❖ Receives the new treatment being investigated
      ❖ Control Group
        ❖ Receives a different treatment; or
        ❖ Receives the same treatment as usual (i.e., is left alone)
      The control group is needed in order to identify and measure any differences observed as a result of the differing treatments.
    • Potential Issues in Experimental Research
      ❖ Experimental treatment not given adequate time to take effect
        ❖ The experimental group should be exposed to the treatment for a long enough period of time for the treatment to work
      ❖ Treatments received by the two groups are not “different enough”
        ❖ No difference between the groups will be found if the experimental treatment and the control treatment are too similar
    • Experimental Validity
      ❖ Experiments are considered valid if:
        ❖ The results obtained are due only to the manipulation of the independent variable
      ❖ Two conditions must be met:
        ❖ The experiment has internal validity
        ❖ The experiment has external validity
    • Internal Validity
      ❖ Observed differences on the dependent variable are the direct result of the researcher’s manipulation of the independent variable.
      ❖ Campbell & Stanley (1971) identified 8 threats to internal validity:
        ❖ History - becomes more likely the longer a study runs; caused by external events.
        ❖ Maturation - physical/mental changes occurring in subjects over time; more likely to occur when a study extends over a long period of time.
        ❖ Testing (pretest sensitization) - higher scores on a posttest because participants have taken a pretest; unlike the above, more likely to occur when intervals between testing are short.
        ❖ Instrumentation - lack of consistency between measuring instruments; unreliable data collection leads to unreliable/invalid results.
        ❖ Statistical Regression - tendency for extreme scores to move toward the mean; participants who score highest and lowest on a pretest are likely to score lower and higher (respectively) on a posttest.
        ❖ Differential Selection of Subjects - differences already present between two pre-formed groups could account for differences in posttest results.
        ❖ Mortality (attrition) - occurs most often in long-term studies; refers to participants who drop out of a group potentially sharing some characteristic that affects the significance of the study.*
        ❖ Selection-Maturation Interaction, etc. - if pre-formed groups are used, one group may be at an advantage or disadvantage due to factors of maturation; the “etc.” refers to the fact that selection can also interact in this way with other factors such as history, testing, and instrumentation.
    • External Validity
      ❖ Results of the experiment are generalizable to groups and environments outside of the experiment; results of the study can be reconfirmed with other groups, in other settings, and at other times (if the conditions are similar to those present in the experiment).
      ❖ Bracht & Glass (1968) identified 6 threats to external validity:
        ❖ Pretest-Treatment Interaction - participants react differently to a treatment because they have been pretested; pretests may alert participants to the make-up of the treatment; therefore, results can only be generalized to other pretested groups.
        ❖ Multiple-Treatment Interference - the same participants receive multiple treatments in succession; effects carried over from the first treatment make it hard to determine the effectiveness of the second.
        ❖ Selection-Treatment Interaction - occurs when participants are not randomly selected for the treatments they receive; can occur when participants are a pre-formed group or an individual; limits the generalizability of the results.
        ❖ Specificity of Variables - does not depend on the experimental design chosen; threatens validity when a study is conducted:
          ❖ with a specific kind of subject;
          ❖ based on a particular definition of the independent variable;
          ❖ using specific measuring instruments;
          ❖ at a specific time; and
          ❖ under a specific set of circumstances.
        ❖ Experimenter Effects - the experimenter unintentionally affects the implementation of the study’s procedures, the behavior of the participants, or the assessment of participant behavior, thereby affecting the results of the study.
        ❖ Reactive Arrangements - factors associated with how a study is conducted influence the feelings and attitudes of the participants; affects generalizability of the results.
    • Extraneous Variables
      ❖ The control of extraneous variables is vital to the success of an experiment.
      ❖ Extraneous variables can be controlled through:
        ❖ Randomization - subjects should be randomly selected for participation and randomly assigned to groups; random selection should be attempted whenever possible
        ❖ Matching - the researcher pairs up participants with matching (similar) scores or characteristics (gender, IQ, location), then randomly assigns each member of a pair to a different group; this ensures that, for example, the pair with matching IQ scores are not in the same group
        ❖ Comparing homogeneous groups or subgroups - group participants according to their fit into a variable subgroup (IQ, SAT score); randomly assign half of each subgroup to the experimental group and the other half to the control group
        ❖ Using subjects as their own controls - the same participants get both treatments (one treatment at a time); controls for participant differences; can result (negatively) in carry-over effects between the treatments
        ❖ Analysis of covariance - statistically equate randomly formed groups on a particular variable; can be used to adjust for large differences in pretest scores between groups
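    The randomization and matching procedures on this slide can be sketched in code. This is a minimal illustration, not part of the original deck: the function names (`randomize`, `match_and_assign`) and the sample IQ data are hypothetical, and matching here simply pairs adjacent scores after sorting, which is one common way to operationalize "similar scores."

    ```python
    import random

    def randomize(subjects, seed=0):
        """Randomly split subjects into experimental and control groups."""
        rng = random.Random(seed)
        pool = list(subjects)
        rng.shuffle(pool)
        half = len(pool) // 2
        return pool[:half], pool[half:]

    def match_and_assign(subjects, score, seed=0):
        """Pair subjects with similar scores, then randomly send one member
        of each pair to each group, so matched pairs never share a group."""
        rng = random.Random(seed)
        ordered = sorted(subjects, key=score)       # adjacent subjects have similar scores
        experimental, control = [], []
        for i in range(0, len(ordered) - 1, 2):
            pair = [ordered[i], ordered[i + 1]]
            rng.shuffle(pair)                       # random assignment within the pair
            experimental.append(pair[0])
            control.append(pair[1])
        return experimental, control

    # Hypothetical IQ scores for six participants
    iq = {"A": 100, "B": 101, "C": 115, "D": 116, "E": 130, "F": 131}
    exp, ctl = match_and_assign(iq.keys(), score=iq.get)
    ```

    With these scores, A/B, C/D, and E/F form the matched pairs, and each pair is split across the two groups, which is exactly the constraint the slide describes.
    
    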
    • Group Designs
      ❖ Two classes of experimental designs:
      ❖ Single-Variable: one independent variable; the IV is manipulated
        ❖ Three types:
          ❖ Pre-experimental
          ❖ True experimental*
          ❖ Quasi-experimental
      ❖ Factorial: two or more independent variables; at least one IV is manipulated
        ❖ Elaborates on single-variable designs;
        ❖ Investigates each variable independently and in interaction with other variables;
        ❖ Sky’s the limit**
    • Pre-Experimental Designs
      ❖ One-Shot Case Study
        ❖ One group exposed to one treatment, then given a posttest
        ❖ Don’t know the group’s level of knowledge before the treatment!
        ❖ Sources of invalidity are not controlled!
      ❖ One-Group Pretest-Posttest Design
        ❖ One group pretested, exposed to one treatment, then posttested
        ❖ Still a number of factors affecting validity that are not controlled!
        ❖ Other factors may influence any differences observed between the pretest and posttest
      ❖ Static-Group Comparison
        ❖ At least two groups; the first receives the new treatment, the second receives the usual treatment; both are posttested
        ❖ Purpose of the control group is to show how the experimental (first) group would have performed had it not received the new treatment
        ❖ Effective only to the degree that the two groups are equal to each other
    • True Experimental Designs
      ❖ Pretest-Posttest Control Group Design
        ❖ At least two randomly assigned groups; both pretested on the dependent variable; one group then receives the new treatment; then both groups are posttested
        ❖ Internal invalidity fully controlled by random assignment, pretesting, and inclusion of a control group
        ❖ Potential risk of interaction between the pretest and the treatment*
      ❖ Posttest-Only Control Group Design
        ❖ Same as the pretest-posttest design, except there is no pretest
        ❖ Subjects randomly assigned; exposed to the independent variable; then posttested
        ❖ Mortality is not controlled for (no pretest), but may not be a problem anyway
      ❖ Solomon Four-Group Design
        ❖ Random assignment of participants to one of four groups
        ❖ Two groups are pretested; two groups are not
        ❖ One pretested group and one unpretested group receive the experimental treatment
        ❖ All four groups are posttested
        ❖ Combines the two designs above; eliminates both sources of internal invalidity!
    • Quasi-Experimental Designs
      ❖ Nonequivalent Control Group Design
        ❖ Two or more existing groups pretested, administered a treatment, and posttested
        ❖ Participants’ assignment to groups is not random; assignment of treatments to groups is random
        ❖ Invalidity sources include regression and selection interactions (with maturation, history, and testing)
      ❖ Time-Series Design
        ❖ One group repeatedly pretested, administered a treatment, then repeatedly posttested
        ❖ Elaboration of the one-group pretest-posttest design; involves testing (pre- and post-) more than once
        ❖ Advantage lies in the confidence gained through significant improvement of group scores between pretests and posttests
      ❖ Counterbalanced Designs
        ❖ All groups receive all treatments; each group receives the treatments in a different order than the others
        ❖ Any number of groups can be involved, limited only by the number of treatments; # of groups = # of treatments
        ❖ The order in which each group receives the treatments is determined randomly; each group is posttested after each treatment
        ❖ A pretest is usually not possible and/or feasible; often used with existing groups
        ❖ Weakness lies in the potential for multiple-treatment interference; thus, should be used only when this is not a concern
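    The counterbalanced constraint above (# of groups = # of treatments, every group gets every treatment in a different order) can be sketched as a schedule generator. This is an illustrative assumption, not from the deck: it builds orders by rotation (a simple Latin square), whereas the slide's literal procedure determines each group's order randomly; rotation is used here because it guarantees the orders differ.

    ```python
    def counterbalance(treatments):
        """Build a counterbalanced schedule: one group per treatment,
        each group receiving every treatment, each in a rotated order."""
        n = len(treatments)
        return [[treatments[(g + t) % n] for t in range(n)] for g in range(n)]

    # Three treatments -> three groups, each with a distinct order
    schedule = counterbalance(["X", "Y", "Z"])
    # group 1: X, Y, Z   group 2: Y, Z, X   group 3: Z, X, Y
    ```

    A side benefit of the rotation is that each treatment also appears once in each ordinal position, so position effects are balanced across groups as well.
    
    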
    • Factorial Designs
      ❖ Two or more independent variables; at least one is manipulated by the researcher
      ❖ The term “factorial” comes from the use of multiple variables with multiple levels
      ❖ 2 x 2 factorial design*
      ❖ Can get very complicated (2 x 3, 3 x 2, etc.)!
      ❖ Often employed after using a single-variable design;
      ❖ “Variables do not operate in isolation”
      ❖ Studies how variables behave at different levels**
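    The cells of a factorial design are just the cross product of factor levels, which a few lines of code make concrete. The factor names (`method`, `class_size`) and their levels below are hypothetical examples, not taken from the deck.

    ```python
    from itertools import product

    def factorial_conditions(**factors):
        """Enumerate every cell of a factorial design from its factor levels."""
        names = list(factors)
        return [dict(zip(names, combo)) for combo in product(*factors.values())]

    # A 2 x 2 design: two IVs with two levels each -> 4 cells
    cells = factorial_conditions(method=["lecture", "discussion"],
                                 class_size=["small", "large"])
    ```

    A 2 x 3 design would simply pass three levels for the second factor, yielding 6 cells; the cell count is the product of the dimensions in the design's name.
    
    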
    • Single-Subject Experimental Designs
      ❖ Also referred to as “single-case experimental designs”
      ❖ Used when the sample size = 1, or for multiple individuals considered as one group
      ❖ Variation of the time-series design
      ❖ Typically used to study behavioral change in an individual
      ❖ The participant is his or her own control; exposed to both nontreatment and treatment phases
      ❖ The individual’s performance is measured repeatedly during all phases
      ❖ Nontreatment phase = A; treatment phase = B
    • Validity in Single-Subject Experiments
      ❖ External Validity
        ❖ Frequent criticism due to lack of generalizability
        ❖ Can be counteracted through replication
      ❖ Internal Validity
        ❖ Repeated and Reliable Measurement
          ❖ If results are to be trusted, treatment must follow the exact same procedures every time
        ❖ Baseline Stability
          ❖ Provides the basis for assessing the effectiveness of the treatment; must take enough baseline measurements to establish a pattern*
        ❖ The Single-Variable Rule
          ❖ Only one variable should be manipulated at any one time!
    • Types of Single-Subject Designs ❖ A-B-A Withdrawal Designs -- ❖ The A-B* Design ❖ Establishment of baseline stability; treatment given ❖ Improvement during treatment = effectiveness of treatment ❖ The A-B-A Design ❖ Adds a second baseline measurement to the A-B design ❖ Improves validity IF behavior improves during the B phase, and subsequently deteriorates during the second A phase ❖ The A-B-A-B Design ❖ Adds a second treatment phase to the A-B-A design ❖ Could add strength to experiment IF behavior improves during treatment twice! ❖ Eliminates ethical concerns from A-B-A design (ending with participant not receiving potentially effective treatment)Tuesday, February 15, 2011 20
    • Types of Single-Subject Designs (cont’d)
      ❖ Multiple-Baseline Designs
        ❖ Alternative to the A-B design
        ❖ Used when the treatment cannot be withdrawn, or when it would be unethical to do so
        ❖ Three basic types: across behaviors, across subjects, and across settings*
      ❖ Alternating Treatments Design
        ❖ Only valid design for assessing the effectiveness of 2+ treatments in a single-subject context
        ❖ Rapid alternation of treatments for a single subject
        ❖ Treatments are alternated randomly
        ❖ Notice: no withdrawal phase, no baseline phase
        ❖ Allows for the study of multiple treatments quickly and efficiently
        ❖ Could introduce multiple-treatment interference
    • Data Analysis/Interpretation
      ❖ Typically involves graphically represented results
      ❖ The design must be evaluated for adequacy; then treatment effectiveness is assessed
      ❖ Clinical significance vs. statistical significance
      ❖ t and F tests can be used to test for statistical significance
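    The t test mentioned above can be illustrated with a short sketch; this is not from the deck, and it assumes the classic pooled-variance (equal-variance) form of the independent-samples t test, computed from scratch with the standard library. The function name `pooled_t` and the sample scores are hypothetical.

    ```python
    from math import sqrt
    from statistics import mean, variance

    def pooled_t(sample_a, sample_b):
        """Two-sample t statistic with a pooled variance estimate
        (the classic independent-samples t test, equal variances assumed)."""
        na, nb = len(sample_a), len(sample_b)
        # Pool the two sample variances, weighted by their degrees of freedom
        sp2 = ((na - 1) * variance(sample_a) +
               (nb - 1) * variance(sample_b)) / (na + nb - 2)
        # Standardize the difference in group means by its standard error
        t = (mean(sample_a) - mean(sample_b)) / sqrt(sp2 * (1 / na + 1 / nb))
        return t, na + nb - 2  # t statistic and degrees of freedom

    # Hypothetical posttest scores for experimental and control groups
    t_stat, df = pooled_t([82, 90, 88, 95], [70, 75, 72, 78])
    ```

    The resulting t statistic is then compared against a t distribution with `df` degrees of freedom (or fed to a library routine) to obtain a p-value; in practice one would reach for scipy.stats.ttest_ind rather than hand-rolling this.
    
    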
    • Replicating Results
      ❖ As results are replicated, confidence in the procedures used grows
      ❖ Direct replication
        ❖ Replication by the same investigator in the same setting
        ❖ [Note] the same or different participants may be used
      ❖ Simultaneous replication
        ❖ Same problem, same location, and same time
      ❖ Systematic replication
        ❖ Direct replication with different investigators, behaviors, or settings
      ❖ Clinical replication
        ❖ Treatment package combining 2+ treatments*
        ❖ Designed for participants with complex behavior disorders
    • Example of Experimental Research
      ❖ Brain-Computer Interface Project
        ❖ University of Illinois at Urbana-Champaign
        ❖ Collected brain signals through EEG
        ❖ Used one group of 9 individuals
        ❖ Allowed a “practice” session before testing, but no pretest was conducted
    • Infamous Cases of Unethical Research
      ❖ Tuskegee Syphilis Study (1932-1972)
        ❖ Nearly 400 African-American men with syphilis were left untreated, without their informed consent
        ❖ Study conducted by the Public Health Service
        ❖ Led to the 1979 Belmont Report (the modern foundation for ethical research on human subjects)
      ❖ Milgram Obedience to Authority Study (began 1961; made public 1963)
        ❖ Residents of New Haven, CT recruited to participate in a study of “memory and learning”
        ❖ Participants asked to inflict electric shocks of increasing voltage based on the “learner’s” incorrect answers (maximum of 450 volts)
        ❖ Study conducted at Yale University; intended to determine whether ordinary people would follow orders they considered immoral (i.e., the Nazi Holocaust/Adolf Eichmann)
      ❖ Stanford Prison Experiment (1971)
        ❖ 24 students were selected and divided into “prisoners” and “guards,” with guards assigned to 3 shifts
        ❖ Shut down after 6 days (originally intended to run 2 weeks) due to a deterioration of the experiment’s conditions and structure
        ❖ Both prisoners and guards adapted to their given roles, guards becoming authoritarian and prisoners becoming passive
    • References

      Gay, L. R. (1996). Educational research: Competencies for analysis and application (5th ed.). Englewood Cliffs, NJ: Merrill.

      Milgram experiment. (2011, February 7). In Wikipedia, The Free Encyclopedia. Retrieved from http://en.wikipedia.org/w/index.php?title=Milgram_experiment&oldid=412574744

      Omar, C., Akce, A., Johnson, M., Bretl, T., Rui, M., & Maclin, E. (2011). A feedback information-theoretic approach to the design of brain-computer interfaces. International Journal of Human-Computer Interaction, 27(1), 5-23. doi:10.1080/10447318.2011.535749

      Stanford prison experiment. (2011, February 11). In Wikipedia, The Free Encyclopedia. Retrieved from http://en.wikipedia.org/w/index.php?title=Stanford_prison_experiment&oldid=413232983

      Tuskegee syphilis experiment. (2011, February 3). In Wikipedia, The Free Encyclopedia. Retrieved from http://en.wikipedia.org/w/index.php?title=Tuskegee_syphilis_experiment&oldid=411791432