
Grp presentation chap 13

• Experimental research is unique in the extent to which the researcher can manipulate a variable and test a theory, or the effectiveness of a treatment, based on the outcome. The independent variable, a.k.a. the experimental or treatment variable (e.g., methods of instruction, types of assignment, learning materials, rewards given to students, and types of questions asked by the teacher). The dependent variable, a.k.a. the criterion or outcome variable (e.g., achievement, interest in a subject, attention span, motivation, and attitudes toward school). Experimental research enables researchers to go beyond description and prediction, beyond the identification of relationships, to at least a partial determination of what causes them.
• Experimental research = try something and systematically observe what happens. Formal experiments have two basic conditions: first, at least two conditions or methods are compared to assess the effect of the treatment (the independent variable); second, the independent variable is directly manipulated by the researcher.
• The experimental group receives a treatment; the control/comparison group receives no treatment or a different one, and becomes the yardstick for determining whether the treatment is effective. The researcher actively manipulates the treatment (independent variable), deliberately and directly determining what forms it will take and which group will get which form. Independent variables that can be manipulated include teaching method, type of counseling, learning activities, etc. Independent variables may be established in several ways: (i) one variable vs. another, (ii) presence vs. absence, (iii) varying degrees of the same form.
• Randomization is intended to eliminate the threat of extraneous or additional variables and ensures that the groups formed are equivalent at the beginning of an experiment. There is no guarantee of equivalent groups unless both groups (experimental and control) are sufficiently large.
• Researchers exercise far more control in experiments: they determine the treatment, select the sample, assign individuals to groups, decide which group gets the treatment, control factors besides the treatment that may influence the result, and observe or measure the effect of the treatment. Researchers need to do their best to control (eliminate or minimize) the possible effect of each threat, e.g., studying the effect of two different methods of instruction on students' attitudes toward history without making sure that the groups involved were equivalent in ability.
• Random assignment enables researchers to assume that the groups are equivalent (one of the distinguishing characteristics of experimental research). Other ways to control extraneous variables: eliminate the possible effects of a variable by removing it from the study (e.g., if gender might influence the outcome, restrict the study to one gender); build the variable into the design and analyze its effect on the outcome (e.g., study the effects of both gender and method); match subjects on certain variables of interest (e.g., if age affects the outcome, match according to age and randomly assign to comparison groups); use subjects as their own controls (e.g., assess behavior before and after treatment to see whether changes occur); equate groups statistically based on a pretest or other variables.
• These designs are poor because they make it difficult to assess the effectiveness of the independent variable (treatment).
• One group only (the experimental group) receives the treatment. With no control/comparison group, effectiveness cannot be measured; with no pretest, the researcher knows nothing about the subjects before the treatment and thus cannot tell whether it was effective.
• A pretest exists, but so do nine uncontrolled-for threats (history, maturation, instrument decay, data collector characteristics, data collector bias, testing, statistical regression, attitude of subjects, and implementation). The researcher would not know whether any difference between pretest and posttest is due to the treatment or to these threats.
• a.k.a. the nonequivalent control group design. Groups are formed, but subjects are not randomly assigned. As diagrammed, it shows better control (history, maturation, testing, and regression) but is still not a good design, since other threats (mortality, location, and subject characteristics) may occur.
• Better control, since change is analyzed rather than raw scores, but a threat still remains because results depend on initial performance (whether pretest scores improve or decline).
• True = random assignment to the treatment (independent variable) groups. Random assignment is the best tool for controlling threats to internal validity.
• Two groups, an experimental and a control/comparison group, are formed by random assignment. Threats still exist but can sometimes be controlled by appropriate modifications. It is important to keep a clear distinction between random selection and random assignment: random selection is intended to provide a representative sample; random assignment is intended to equate groups and often is not accompanied by random selection.
• X1 represents exposure to the treatment (independent variable); O refers to the measurement of the dependent variable (outcome); R represents random assignment of individuals to groups; X2 represents the control group.
• Differs from the previous design solely in the use of a pretest. However, a pretest may alert members of the experimental group, thereby causing them to do better (or worse).
• Attempts to eliminate the possible threat of a pretest. Two groups are pretested and two are not; one pretested group and one un-pretested group are exposed to the experimental treatment, and all four groups are post-tested. Requires a considerable amount of energy and effort.
• Matching variables can be based on previous research, theory, and/or the experience of the researcher.
• Mechanical matching is the process of pairing two persons whose scores on a particular variable are similar. Two problems with mechanical matching: (1) it is difficult to match on more than two or three variables, and (2) some subjects inevitably must be eliminated from the study because no "matches" can be found for them. In statistical matching, each subject is given a "predicted" score on the dependent variable, based on the correlation between the dependent variable (outcome) and the variable(s) on which the subjects are being matched; the difference between the predicted and actual scores for each individual is then used to compare the experimental and control groups. When the pretest is used as the matching variable, the difference between predicted and actual score is called a regressed gain score. Gain scores are posttest scores minus pretest scores. Mechanical vs. statistical: in mechanical matching, one member of each matched pair is randomly assigned to the experimental group and the other to the control group; in statistical matching, the sample is divided randomly at the outset and the statistical adjustments are made after all data have been collected.
• Quasi-experimental designs do not rely on random assignment but use other techniques to control, or at least reduce, threats to internal validity.
• The researcher matches the subjects in the experimental and control groups on certain variables but has no assurance that they are equivalent on others, because even though matched, the subjects are already in intact groups. The correlation between the matching variable and the dependent variable should be fairly substantial.
• Counterbalancing is another technique for equating experimental and comparison groups. In the example, Method X is superior for both groups.

1. Experimental Research: Chapter Thirteen
2. Experimental Research: Chapter Thirteen
3. Uniqueness of Experimental Research
• Experimental research is unique in two important respects: 1) it is the only type of research that attempts to influence a particular variable; 2) it is the best type of research for testing hypotheses about cause-and-effect relationships.
• Experimental research looks at the following variables: the independent variable (treatment) and the dependent variable (outcome).
4. Major Characteristics of Experimental Research
• The researcher manipulates the independent variable.
• They decide the nature and the extent of the treatment.
• After the treatment has been administered, researchers observe or measure the groups receiving the treatments to see if they differ.
• Experimental research enables researchers to go beyond description and prediction, and attempt to determine what caused effects.
5. Essential Characteristics of Experimental Research
Comparison of groups:
• The experimental group receives a treatment of some sort while the control group receives no treatment.
• This enables the researcher to determine whether the treatment has had an effect or whether one treatment is more effective than another.
Manipulation of the independent variable:
• The researcher deliberately and directly determines what forms the independent variable will take and which group will get which form.
6. Essential Characteristics of Experimental Research
Randomization:
• Random assignment is similar but not identical to random selection.
• Random assignment means that every individual who is participating in the experiment has an equal chance of being assigned to any of the experimental or control groups.
• Random selection means that every member of a population has an equal chance of being selected to be a member of the sample.
• Three things occur with random assignment of subjects: 1) it takes place before the experiment begins; 2) it is a process of assigning individuals to groups; 3) the resulting groups should be equivalent.
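The distinction between random selection and random assignment described on this slide can be sketched in a few lines of Python. The student IDs, sample size, and 50/50 split below are illustrative assumptions, not details from the chapter:

```python
import random

def random_selection(population, sample_size, seed=0):
    """Random selection: every member of the population has an
    equal chance of ending up in the sample."""
    rng = random.Random(seed)
    return rng.sample(population, sample_size)

def random_assignment(sample, seed=0):
    """Random assignment: every participant in the sample has an
    equal chance of landing in either group."""
    rng = random.Random(seed)
    shuffled = sample[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # experimental, control

# Hypothetical population of 100 students; select 40, then assign.
population = [f"student_{i}" for i in range(100)]
sample = random_selection(population, 40)
experimental, control = random_assignment(sample)
```

Note that the two steps are independent: a study can randomly assign participants (equating the groups) without having randomly selected them (so the sample may not be representative).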
7. Control of Extraneous Variables
• The researcher has the ability to control many aspects of an experiment.
• It is the responsibility of the researcher to control for possible threats to internal validity.
• This is done by ensuring that all subject characteristics that might affect the study are controlled.
8. Most Common Ways to Eliminate Threats
• Randomization
• Hold certain variables constant
• Build the variable into the design
• Matching
• Use subjects as their own controls
• Analysis of covariance (ANCOVA)
9. Poor Experimental Designs
• The following designs are considered weak since they do not have built-in controls for threats to internal validity:
– The One-Shot Case Study: a single group is exposed to a treatment and its effects are assessed.
– The One-Group Pretest-Posttest Design: a single group is measured both before and after a treatment exposure.
– The Static-Group Comparison Design: two intact groups receive two different treatments.
10. The One-Shot Case Study
• A single measure is recorded after the treatment is administered.
• The study lacks any comparison or control of extraneous influences.
• To remedy this design, a comparison could be made with another group.
• Diagrammed as:
11. The One-Group Pretest-Posttest Design
• Subjects are measured before and after the treatment is administered.
• Uncontrolled-for threats to internal validity exist.
• To remedy this design, a comparison group could be added.
• Diagrammed as:
12. The Static-Group Comparison Design
• Uses two existing, or intact, groups.
• The experimental group is measured after being exposed to the treatment.
• The control group is measured without having been exposed to the treatment.
• Diagrammed as:
13. The Static-Group Pretest-Posttest Design
• A pretest is given to both groups.
• "Gain" or "change" = posttest score minus pretest score.
• Better control of the subject characteristics threat.
• A pretest raises the possibility of a testing threat.
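The gain-score comparison in the static-group pretest-posttest design can be illustrated with a small Python sketch. The scores and group sizes below are hypothetical, invented purely for illustration:

```python
# Hypothetical pretest/posttest scores for two intact groups.
experimental = {"pre": [52, 60, 55], "post": [66, 70, 64]}
control      = {"pre": [54, 58, 57], "post": [58, 61, 60]}

def mean_gain(group):
    """Average gain score: posttest score minus pretest score."""
    gains = [post - pre for pre, post in zip(group["pre"], group["post"])]
    return sum(gains) / len(gains)

gain_exp = mean_gain(experimental)  # average change in the treated group
gain_ctl = mean_gain(control)       # average change in the untreated group
```

Comparing `gain_exp` against `gain_ctl`, rather than raw posttest scores, is what gives this design its (partial) control over pre-existing differences between the intact groups.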
14. True Experimental Designs
• The essential ingredient of a true experiment is random assignment of subjects to treatment groups.
• Random assignment is a powerful tool for controlling threats to internal validity.
– The Randomized Posttest-Only Control Group Design: both groups receive different treatments.
– The Randomized Pretest-Posttest Control Group Design: a pretest is included in this design.
– The Randomized Solomon Four-Group Design: four groups are used, with two pretested and two not.
15. The Randomized Posttest-Only Control Group Design
• The experimental group is tested after treatment exposure.
• The control group is tested at the same time without exposure to the experimental treatment.
• Includes random assignment to groups.
• Threats to internal validity: mortality, attitudinal, implementation, data collector bias, location, and history.
16. Example of a Randomized Posttest-Only Control Group Design
17. The Randomized Pretest-Posttest Control Group Design
• The experimental group is tested before and after treatment exposure.
• The control group is tested at the same two times without exposure to the experimental treatment.
• Includes random assignment to groups.
• The pretest raises the possibility of a pretest-treatment interaction threat.
18. Example of a Randomized Pretest-Posttest Control Group Design
19. The Randomized Solomon Four-Group Design
• Combines the pretest-posttest control group design with the posttest-only control group design.
• Provides a means of controlling the interactive testing effect and other sources of extraneous variation.
• Includes random assignment.
• Weakness: requires a large sample.
20. Example of a Randomized Solomon Four-Group Design
21. Random Assignment with Matching
• To increase the likelihood that groups of subjects will be equivalent, pairs of subjects may be matched on certain variables.
• Members of matched pairs are then assigned to the experimental or control groups.
• Matching can be mechanical or statistical.
22. Mechanical and Statistical Matching
• Mechanical matching is a process of pairing two persons whose scores on a particular variable are similar.
• Statistical matching does not necessitate a loss of subjects, nor does it limit the number of matching variables.
– Each subject is given a "predicted" score on the dependent variable, based on the correlation between the dependent variable and the variable on which the subjects are being matched.
– The difference between the predicted and actual scores for each individual is then used to compare the experimental and control groups.
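As a rough illustration of statistical matching, the sketch below computes regressed gain scores: each posttest score is predicted from the pretest with a least-squares line (equivalent to using the pretest/posttest correlation), and the residual is the regressed gain. All scores and group labels are hypothetical:

```python
# Hypothetical pretest/posttest scores; E = experimental, C = control.
pre   = [55, 60, 62, 70, 75, 80, 58, 66, 72, 78]
post  = [60, 63, 70, 74, 80, 85, 61, 70, 77, 83]
group = ["E", "C", "E", "C", "E", "C", "E", "C", "E", "C"]

n = len(pre)
mean_pre = sum(pre) / n
mean_post = sum(post) / n

# Least-squares slope and intercept of posttest on pretest.
slope = sum((x - mean_pre) * (y - mean_post) for x, y in zip(pre, post)) \
        / sum((x - mean_pre) ** 2 for x in pre)
intercept = mean_post - slope * mean_pre

# Regressed gain score: actual posttest minus predicted posttest.
regressed_gain = [y - (intercept + slope * x) for x, y in zip(pre, post)]

# Compare the groups on their average regressed gains.
e_gains = [g for g, grp in zip(regressed_gain, group) if grp == "E"]
c_gains = [g for g, grp in zip(regressed_gain, group) if grp == "C"]
mean_gain_E = sum(e_gains) / len(e_gains)
mean_gain_C = sum(c_gains) / len(c_gains)
```

Because every subject gets a predicted score, no subjects are discarded for lack of a match, which is the advantage the slide attributes to statistical over mechanical matching.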
23. A Randomized Posttest-Only Control Group Design
24. Quasi-Experimental Designs
• Do not include the use of random assignment but use other techniques to control for threats to internal validity:
– The Matching-Only Design: similar, except that no random assignment occurs.
– Counterbalanced Designs: all groups are exposed to all treatments, but in a different order.
– Time-Series Designs: involve repeated measures over time, both before and after treatment.
25. The Matching-Only Design
• Random assignment is not used.
• An alternative to random assignment of subjects, but never a substitute for it.
26. Counterbalanced Designs
• Each group is exposed to all treatments, but in a different order.
• The effectiveness of the various treatments can be determined by comparing the average score for all groups on the posttest for each treatment.
• e.g., Results (means) from a study using a counterbalanced design.
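The averaging step described on this slide can be sketched as follows. The group names, method names, and posttest means are hypothetical, chosen so that one method comes out ahead for both groups, as in the chapter's example:

```python
# Hypothetical posttest means from a counterbalanced design:
# each group receives every treatment, in a different order.
scores = {
    "Group 1": {"Method A": 70, "Method B": 62},  # order: A then B
    "Group 2": {"Method A": 68, "Method B": 60},  # order: B then A
}

def treatment_average(scores, treatment):
    """Average a treatment's posttest mean across all groups,
    so that order effects cancel out."""
    values = [g[treatment] for g in scores.values()]
    return sum(values) / len(values)

avg_a = treatment_average(scores, "Method A")
avg_b = treatment_average(scores, "Method B")
```

Because every group sees every treatment, comparing `avg_a` with `avg_b` separates the effect of the method from the effect of the order in which treatments were given.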
27. Time-Series Designs
• Involve periodic measurements on the dependent variable for a group of test units.
• After multiple measurements, the experimental treatment is administered (or occurs naturally).
• After the treatment, periodic measurements are continued in order to determine the treatment effect.
• Threats to internal validity: history, instrumentation, and testing.
• Infrequently used due to the extensive amount of data collection required.
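A minimal sketch of how a time-series design's treatment effect might be estimated, assuming four hypothetical observations before the treatment and four after (a real analysis would also examine the trend across the whole series, not just the two means):

```python
# Hypothetical repeated measurements: O O O O  X  O O O O
# (four observations, then the treatment X, then four more).
before = [50, 51, 49, 50]
after  = [58, 59, 57, 60]

mean_before = sum(before) / len(before)  # baseline level
mean_after  = sum(after) / len(after)    # post-treatment level
shift = mean_after - mean_before         # apparent treatment effect
```

The repeated pre-treatment measurements are what distinguish this design from a one-group pretest-posttest: a stable baseline followed by an abrupt shift is harder to attribute to maturation or testing alone.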
28. Possible Outcome Patterns in a Time-Series Design
29. Factorial Designs
• Factorial designs extend the number of relationships that may be examined in an experimental study.
• They are modifications of either the posttest-only control group or pretest-posttest control group designs that permit the investigation of additional independent variables.
• They also allow a researcher to study the interaction of an independent variable with one or more other variables (moderator variables).
30. Using a Factorial Design to Study Effects of Method and Class Size on Achievement
31. Illustration of Interaction and No Interaction in a 2 by 2 Factorial Design
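One common way to check for an interaction in a 2 by 2 factorial design is to compare the simple effects of one factor at each level of the other. The cell means below are hypothetical, echoing the method-by-class-size example from the previous slide:

```python
# Hypothetical cell means for a 2x2 factorial design:
# factor 1 = method, factor 2 = class size.
means = {
    ("Method A", "Small"): 80, ("Method A", "Large"): 70,
    ("Method B", "Small"): 60, ("Method B", "Large"): 75,
}

# Simple effect of method within each class size.
effect_small = means[("Method A", "Small")] - means[("Method B", "Small")]
effect_large = means[("Method A", "Large")] - means[("Method B", "Large")]

# If the method effect differs across class sizes, method and
# class size interact (the lines in the plot are not parallel).
interaction = effect_small != effect_large
```

Here Method A is better in small classes but worse in large ones, so class size acts as a moderator variable: reporting only the overall method effect would hide this reversal.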
32. Example of a 4 by 2 Factorial Design
33. Controlling Threats to Internal Validity
The following must be controlled to reduce threats to internal validity:
• Subject characteristics
• Mortality
• Location
• Instrument decay
• Data collector characteristics
• Data collector bias
• Testing
• History
• Maturation
• Attitudinal
• Regression
• Implementation
34. Effectiveness of Experimental Designs in Controlling Threats to Internal Validity
KEY: (++) = strong control, threat unlikely to occur; (+) = some control, threat may possibly occur; (–) = weak control, threat likely to occur; (?) = can't determine; (NA) = threat does not apply
Columns: Subj = subject characteristics; Mort = mortality; Loc = location; Inst = instrument decay; DCC = data collector characteristics; DCB = data collector bias; Test = testing; Hist = history; Mat = maturation; Att = attitudinal; Reg = regression; Impl = implementation

Design                                      Subj Mort Loc  Inst DCC  DCB  Test Hist Mat  Att  Reg  Impl
One-shot case study                         –    –    –    NA   –    –    NA   –    –    –    –    –
One-group pretest-posttest                  –    ?    –    –    –    –    –    –    –    –    –    –
Static-group comparison                     –    –    –    +    –    –    +    ?    +    –    –    –
Randomized posttest-only control group      ++   +    –    +    –    –    ++   +    ++   –    ++   –
Randomized pretest-posttest control group   ++   +    –    +    –    –    +    +    ++   –    ++   –
Solomon four-group                          ++   ++   –    +    –    –    ++   +    ++   –    ++   –
Randomized posttest-only, matched subjects  ++   +    –    +    –    –    ++   +    ++   –    ++   –
Matching-only pretest-posttest control      +    +    –    +    –    –    +    +    +    –    +    –
Counterbalanced                             ++   ++   –    +    –    –    –    ++   ++   ++   ++   –
Time-series                                 ++   –    +    –    –    –    –    –    +    –    ++   –
Factorial with randomization                ++   ++   –    ++   –    –    +    +    ++   –    ++   –
Factorial without randomization             ?    ?    –    ++   –    –    +    +    +    –    ?    –
35. Evaluating the Likelihood of a Threat to Internal Validity
Procedure for assessing the likelihood of a threat to internal validity:
Step 1: Ask: what specific factors either are known to affect the dependent variable or may logically be expected to affect it?
Step 2: Ask: what is the likelihood of the comparison groups differing on each of these factors?
Step 3: Evaluate the threats on the basis of how likely they are to have an effect, and plan to control for them.
36. Guidelines for Handling Internal Validity in Comparison Group Studies