Types of research design experiments



Published in: Technology, Health & Medicine

  1. 1. Types of research design – experiments <ul><ul><ul><li>Chapter 8 in Babbie & Mouton (2001) </li></ul></ul></ul><ul><li>Introduction to all research designs </li></ul><ul><li>All research designs have specific objectives they strive for </li></ul><ul><li>Have different strengths and limitations </li></ul><ul><li>Have validity considerations </li></ul>
  2. 2. Validity considerations <ul><ul><li>When we say that a knowledge claim (or proposition) is valid, we make a JUDGEMENT about the extent to which relevant evidence supports that claim to be true </li></ul></ul><ul><ul><li>Is the interpretation of the evidence given the only possible one, or are there other plausible ones? </li></ul></ul><ul><ul><li>&quot;Plausible rival hypotheses&quot; = potential alternative explanations/claims </li></ul></ul><ul><ul><ul><li>e.g. New York City's &quot;zero tolerance&quot; crime fighting strategy in the 1980s and 1990s - the reverse of the &quot;broken windows&quot; effect </li></ul></ul></ul>
  3. 3. The logic of causal social research in the controlled experiment <ul><li>Explanatory rather than descriptive </li></ul><ul><li>Different from correlational research - one variable is manipulated (IV) and the effect of that manipulation observed on a second variable (DV) </li></ul><ul><li>If … then …. </li></ul><ul><li>E.g. </li></ul><ul><ul><li>&quot;Animals respond aggressively to crowding&quot; (causal) </li></ul></ul><ul><ul><li>&quot;People with premarital sexual experience have more stable marriages&quot; (noncausal) </li></ul></ul>
  4. 4. Three pairs of components: <ul><li>Independent and dependent variables </li></ul><ul><li>Pre-testing and post-testing </li></ul><ul><li>Experimental and control groups </li></ul>
  5. 5. Components <ul><li>Variables </li></ul><ul><ul><li>Dependent (DV) </li></ul></ul><ul><ul><li>Independent (IV) </li></ul></ul><ul><li>Pre-testing and post-testing </li></ul><ul><ul><li>O X O </li></ul></ul><ul><li>Experimental and control groups </li></ul><ul><ul><li>To off-set the effects of the experiment itself; to detect effects of the experiment itself </li></ul></ul>
  6. 6. The generic experimental design: <ul><li>R O1 X O2 </li></ul><ul><li>R O3 O4 </li></ul><ul><li>The IV is an active variable; it is manipulated </li></ul><ul><li>The participants who receive one level of the IV are equivalent in all ways to those who receive other levels of the IV </li></ul>
  7. 7. Sampling <ul><li>1. Selecting subjects to participate in the research </li></ul><ul><ul><li>Careful sampling to ensure that results can be generalized from sample to population </li></ul></ul><ul><ul><li>The relationship found might only exist in the sample; need to ensure that it exists in the population </li></ul></ul><ul><ul><li>Probability sampling techniques </li></ul></ul>
  8. 8. Sampling <ul><li>2. How the sample is divided into two or more groups is important </li></ul><ul><ul><li>to make the groups similar when they start off </li></ul></ul><ul><ul><li>randomization - equal chance </li></ul></ul><ul><ul><li>matching - similar to quota sampling procedures </li></ul></ul><ul><ul><li>match the groups in terms of the most relevant variables; e.g. age, sex, and race </li></ul></ul>
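The randomization step described above can be sketched in a few lines of Python (a minimal sketch; the function name, seed, and numeric participant IDs are illustrative, not part of the original design):

```python
import random

def randomize(participants, seed=None):
    """Shuffle the sampled participants and split them into an
    experimental and a control group, so every participant has an
    equal chance of landing in either group."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# 20 sampled participants, identified here simply by number
e_group, c_group = randomize(range(20), seed=42)
```

Matching, by contrast, would first pair participants on the most relevant variables (e.g. age, sex, race) and then assign one member of each pair to each group.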
  9. 9. Variations on the standard experimental design <ul><li>One-shot case study </li></ul><ul><li>X O </li></ul><ul><li>No real comparison </li></ul>
  10. 10. A famous one-group posttest-only design <ul><ul><li>Milgram's study on obedience </li></ul></ul><ul><ul><li>Obedience to authority </li></ul></ul><ul><ul><li>The willingness of subjects to follow the experimenter's orders to give painful electrical shocks to another subject </li></ul></ul><ul><ul><li>A real, important issue here: how could &quot;ordinary&quot; citizens, like many Germans during the Nazi period, do these incredibly cruel and brutal things? </li></ul></ul><ul><ul><li>If a person is under allegiance to a legitimate authority, under what conditions will the person defy the authority if s/he is asked to carry out actions clearly incompatible with basic moral standards? </li></ul></ul>
  11. 11. One-group pre-test post-test design <ul><li>O1 X O2 </li></ul>
  12. 12. Example <ul><li>We want to find out whether a family literacy programme enhances the cognitive development of preschool-age children. </li></ul><ul><li>Find 20 families with a 4-year-old child, and enrol each family in a high-quality family literacy programme </li></ul><ul><li>Administer a pretest to the 20 children - they score a mean of, say, 50 on the cognitive test </li></ul><ul><li>The families participate in the programme for twelve months </li></ul><ul><li>Administer a post-test to the 20 children; now they score a mean of 75 on the test - a gain of 25 </li></ul>
  13. 13. Two claims/conclusions: <ul><li>1 The children gained 25 points on average in terms of their cognitive performance </li></ul><ul><li>2 the family literacy programme caused the gain in scores </li></ul><ul><li>VALIDITY - rival explanations </li></ul>
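The difference between the two claims can be made concrete in code (the scores below are the hypothetical ones from the example, not real data):

```python
# Hypothetical scores for the 20 children in the example
pretest = [50] * 20   # mean of 50 on the cognitive test
posttest = [75] * 20  # mean of 75 twelve months later

def mean(scores):
    return sum(scores) / len(scores)

gain = mean(posttest) - mean(pretest)  # 25.0

# Claim 1 (a 25-point average gain) follows from the data alone.
# Claim 2 (the programme CAUSED the gain) does not: with no control
# group, maturation, testing, and history remain plausible rival
# explanations for the same 25 points.
```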
  14. 14. Static-group comparison <ul><li>X O </li></ul><ul><li> O </li></ul>
  15. 15. Evaluating research (experiments) <ul><li>We know the structure of research </li></ul><ul><li>We understand designs </li></ul><ul><li>We know the requirements of &quot;good&quot; research </li></ul><ul><li>Then we can evaluate a study </li></ul><ul><li>Is it good? Can we believe its conclusions? </li></ul><ul><li>Back to plausible rival hypotheses </li></ul>
  16. 16. Validity in designs <ul><li>If the design is not valid, then the conclusions drawn are not supported; it is like not doing research at all </li></ul><ul><li>Validity of designs come in two parts: </li></ul><ul><ul><li>Internal validity </li></ul></ul><ul><ul><ul><li>can the design sustain the conclusions? </li></ul></ul></ul><ul><ul><li>External validity </li></ul></ul><ul><ul><ul><li>can the conclusions be generalized to the population? </li></ul></ul></ul>
  17. 17. Internal validity <ul><li>Each design is only capable of supporting certain types of conclusions </li></ul><ul><ul><li>e.g. only experiments can support conclusions about causality </li></ul></ul><ul><li>Says nothing about whether the results can be applied to the real world (generalization) </li></ul><ul><li>Generally, the more controlled the situation, the higher the internal validity </li></ul><ul><li>The conclusions drawn from experimental results may not accurately reflect what has gone on in the experiment itself </li></ul>
  18. 18. Sources of internal invalidity <ul><li>These sources are often discussed as part of experiments, but can be applied to all designs (e.g. see reactivity) </li></ul><ul><li>History </li></ul><ul><ul><li>Historical events may occur that will be confounded with the IV </li></ul></ul><ul><ul><li>Especially in field research (compare the control in a laboratory, e.g. nonsense syllables in memory studies) </li></ul></ul>
  19. 19. Maturation <ul><li>Changes over time can be caused by a natural learning process </li></ul><ul><li>People naturally grow older, tired, bored, over time </li></ul>
  20. 20. Testing (reactivity) <ul><li>People realize they are being studied, and respond the way they think is appropriate </li></ul><ul><li>The very act of studying something may change it </li></ul><ul><li>In qualitative research, the &quot;on stage&quot; effects </li></ul>
  21. 21. The Hawthorne studies <ul><li>Improved performance because of the researcher's presence - people became aware that they were in an experiment, or that they were given special treatment </li></ul><ul><li>Especially for people who lack social contacts, e.g. residents of nursing homes, chronic mental patients </li></ul>
  22. 22. Placebo effect <ul><li>When a person expects a treatment or experience to change her/him, the person changes, even when the &quot;treatment&quot; is known to be inert or ineffective </li></ul><ul><li>Medical research </li></ul><ul><li>&quot;The bedside manner&quot;, or the power of suggestion </li></ul>
  23. 23. Experimenter expectancy <ul><li>Pygmalion effect - self-fulfilling prophecies of e.g. teachers' expectancies about student achievement </li></ul><ul><li>Experimenters may prejudge their results - experimenter bias </li></ul><ul><li>Double blind experiments: </li></ul><ul><li>Both the researcher and the research participant are &quot;blind&quot; to the purpose of the study. </li></ul><ul><li>They don't know what treatment the participant is getting </li></ul>
  24. 24. Instrumentation <ul><li>Instruments with low reliability lead to inaccurate findings/missing phenomena </li></ul><ul><li>e.g. human observers become more skilled over time (from pretest to posttest) and so report more accurate scores at later time points </li></ul>
  25. 25. Statistical regression to the mean <ul><li>Studying extreme scores can lead to inflated differences, which would not occur in moderate scorers </li></ul>
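A quick simulation makes the threat concrete (all numbers here are invented for illustration: everyone has the same true score of 100, with normally distributed measurement noise on each test):

```python
import random

rng = random.Random(0)

TRUE_SCORE = 100.0   # everyone's underlying ability
NOISE_SD = 15.0      # measurement error on any single test

def observe():
    return TRUE_SCORE + rng.gauss(0, NOISE_SD)

first = [observe() for _ in range(10_000)]   # pretest
second = [observe() for _ in range(10_000)]  # posttest, no treatment

# Select only the extreme scorers on the pretest ...
extreme = [i for i, s in enumerate(first) if s > 130]
mean_first = sum(first[i] for i in extreme) / len(extreme)
mean_second = sum(second[i] for i in extreme) / len(extreme)
# ... and their retest mean falls back toward 100: an apparent
# "change" produced by selection alone, not by any intervention.
```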
  26. 26. Selection biases <ul><li>Selecting subjects for the study, and assigning them to the E-group and C-group </li></ul><ul><li>Look out for studies using volunteers </li></ul>
  27. 27. Attrition <ul><li>Sometimes called experimental (or subject) mortality </li></ul><ul><li>If subjects drop out, it creates a bias to those who did not </li></ul><ul><ul><li>e.g. comparing the effectiveness of family therapy with discussion groups for treatment of drug addiction </li></ul></ul><ul><ul><li>addicts with the worst prognosis more likely to drop out of the discussion group </li></ul></ul><ul><ul><li>will make it look like family therapy does less well than discussion groups, because the &quot;worst cases&quot; were still in the family therapy group </li></ul></ul>
  28. 28. Diffusion or imitation of treatments <ul><li>When subjects can communicate with each other, they may pass on some information about the treatment (IV) </li></ul>
  29. 29. Compensation <ul><li>In real life, people may feel sorry for the C-group, who do not get &quot;the treatment&quot;, and try to give them something extra </li></ul><ul><ul><li>e.g. compare usual day care for street children with an enhanced day treatment condition </li></ul></ul><ul><ul><li>service providers may very well complain about inequity, and provide some enhanced service to the children receiving usual care </li></ul></ul>
  30. 30. Compensatory rivalry <ul><li>C-group may &quot;work harder&quot; to compete better with the E-group </li></ul>
  31. 31. Demoralization <ul><li>Opposite to compensatory rivalry </li></ul><ul><li>May feel deprived, and give up </li></ul><ul><ul><li>e.g. giving unemployed high school dropouts a second chance at completing matric via a special education programme </li></ul></ul><ul><li>if we assign some of them to a control group, who receive &quot;no treatment&quot;, they may very well become profoundly demoralized </li></ul>
  32. 32. External validity <ul><li>Can the findings of the study be generalized? </li></ul><ul><li>Do they speak only of our sample, or of a wider group? </li></ul><ul><li>To what populations, settings, treatment variables (IV's), and measurement variables can the finding be generalized? </li></ul>
  33. 33. External validity <ul><li>Mainly questions about three aspects: </li></ul><ul><ul><li>Research participants </li></ul></ul><ul><ul><li>Independent variables, or manipulations </li></ul></ul><ul><ul><li>Dependent variables, or outcomes </li></ul></ul><ul><li>Says nothing about the truth of the result that we are generalizing </li></ul><ul><li>External validity only has meaning once the internal validity of a study has been established </li></ul><ul><li>Internal validity is the basic minimum without which an experiment is uninterpretable </li></ul>
  34. 34. External validity <ul><li>Our interest in answering research questions is rarely restricted to the specific situation studied - our interest is in the variables, not the specific details of a piece of research </li></ul><ul><li>But studies differ in many ways, even if they study the same variables: </li></ul><ul><ul><li>operational definitions of the variables </li></ul></ul><ul><ul><li>subject population studied </li></ul></ul><ul><ul><li>procedural details </li></ul></ul><ul><ul><li>observers </li></ul></ul><ul><ul><li>settings </li></ul></ul><ul><li>Generally bigger samples with valid measures lead to better external validity </li></ul>
  35. 35. Sources of external invalidity <ul><li>Subject selection - selecting a sample which does not represent the population well will prevent generalization </li></ul><ul><li>Interaction between the testing situation and the experimental stimulus </li></ul><ul><ul><li>people may be sensitized to the issues by the pre-test </li></ul></ul><ul><ul><li>and respond differently to the questionnaires the second time (post-test) </li></ul></ul><ul><li>Operationalization </li></ul>
  36. 36. Operationalization <ul><li>We take a variable with wide scope and operationalize it in a narrow fashion </li></ul><ul><li>Will we find the same results with a different operationalization of the same variable? </li></ul>
  37. 37. Field experiments <ul><li>&quot;natural&quot; - e.g. disaster research </li></ul><ul><li>Static-group comparison type </li></ul><ul><li>Non-equivalent experimental and control groups </li></ul>
  38. 38. Strengths and weaknesses <ul><li>Strengths </li></ul><ul><ul><li>Control </li></ul></ul><ul><ul><li>Manipulating the IV </li></ul></ul><ul><ul><li>Sorting out extraneous variables </li></ul></ul><ul><li>Weaknesses </li></ul><ul><ul><li>Artificiality - a generalization problem </li></ul></ul><ul><ul><li>Expense </li></ul></ul><ul><ul><li>Limited range of questions </li></ul></ul>
  39. 39. IN CONCLUSION <ul><li>Donald Campbell often cited Neurath's metaphor: </li></ul><ul><ul><li>&quot;in science we are like sailors who must repair a rotting ship while it is afloat at sea. We depend on the relative soundness of all other planks while we replace a particularly weak one. Each of the planks we now depend on we will in turn have to replace. No one of them is a foundation, nor point of certainty, no one of them is incorrigible&quot; </li></ul></ul>