Methods for Tina


  1. Some experimental methods in Educational Research: Questions, Design, Analysis. Adam Dubrowski, HSC Learning Institute
  2. Not enough?
  3. You need to perform a systematic review of the literature
  4. You need to generate an answerable question
  5. You need to generate a hypothesis
  6. You need to design a “clean” experiment
  7. You need to be able to use the stats
  8. You need to generate an answerable question
  9. Types of Questions
     - Health care intervention (or treatment, prevention, therapy): determining the effect of different interventions
     - Harm (causation): ascertaining the effect of potentially harmful agents
     - Prognosis: estimating the future course of a patient’s disease or condition
     - Diagnosis (or assessment): establishing the power of a diagnostic tool
     - Meaning: describing, exploring, and explaining phenomena
     - Economics: studying the economic efficiency of health care programs or interventions
  10. Types of Questions
     - Education: determining the effect of educational interventions
  11. Matching Question to Study Design
     - Quantitative studies are most useful for answering questions of “how many” or “how much.”
     - Qualitative studies are most appropriate for answering questions about how people “feel about” or “experience” certain situations and conditions.
  12. Matching Question to Study Design
     - Quantitative studies are most useful for answering questions of “how many” or “how much.”
     - Qualitative studies are most appropriate for answering questions about how people “feel about” or “experience” certain situations and conditions.
  13. Asking Structured Questions (WHY?)
  14. Asking Structured Questions (WHY?)
  15. Asking Structured Questions (WHY?)
  16. Asking Structured Questions (WHY?)
  17. Asking Structured Questions (WHY?)
  18. Quantitative Questions (PICO)
     - Population
     - Intervention or exposure
     - Comparison
     - Outcome
  19. Quantitative Questions (PICO): The Population
     - Who are the trainees?
     - Why are they enrolled?
     - Is there a particular age or sex grouping?
     - Is there a particular profession grouping?
     - What else?
  20. Quantitative Questions (PICO): The Intervention
     - What interventions are we interested in?
     - Examples?
  21. Quantitative Questions (PICO): The Comparison
     - What are the current educational practices, or what are other possible educational practices that we are interested in comparing?
  22. Quantitative Questions (PICO): The Outcome
     - What are the relevant consequences of the intervention in which we are interested?
       - Satisfaction
       - Performance (individual, team, patient)
       - System changes
       - Economics
  23. Quantitative Questions (PICO): Group work
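Before the group work, it can help to see the four PICO parts written out for one concrete question. The following Python sketch is not part of the original slides; it reuses the hand-washing scenario that appears later in the deck, and the specific population, comparison, and follow-up period are assumptions added for illustration.

```python
# Hypothetical PICO breakdown of an education question (details are assumed
# for illustration; they are not taken from the slides).
pico = {
    "population": "newly hired nurses at a teaching hospital",
    "intervention": "a hands-on, simulation-based hand-hygiene program",
    "comparison": "the existing lecture-based hand-hygiene training",
    "outcome": "observed adherence to hand-washing protocols after 3 months",
}

# Assemble the components into a single answerable question.
question = (
    f"In {pico['population']}, does {pico['intervention']} "
    f"compared with {pico['comparison']} improve {pico['outcome']}?"
)
print(question)
```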
  24. Hypothesis
  25. Hypothesis. A hypothesis [from Greek] consists either of a suggested explanation of a phenomenon or of a reasoned proposal suggesting a possible relationship between multiple phenomena. In science, the scientific method requires that one can test a scientific hypothesis. Such hypotheses are generally based on previous observations or on extensions of scientific theories.
  26. Hypothesis
     - For evaluation of a hypothesis, one needs to define operational terms.
     - “I predict that A will lead to D only if B and C are in place.”
     - A hypothesis should enable predictions that can be evaluated by testing.
     - A hypothesis requires work by the researcher in order to either accept or reject it.
  27. Hypothesis
     - A hypothesis may take the form of asserting a causal relationship (such as “A causes D”), but not always.
     - For example, if a particular independent variable changes, then a certain dependent variable also changes.
     - However, what we do not know is whether there is a direct causal relationship, or whether there is another variable in the middle that causes the change.
  28. Hypothesis: Variables. An independent variable is the variable whose value one actively controls and can change. A dependent variable is the thing whose value then changes as a result. Example: in a study of how the introduction of eLearning modules affects adherence to a program, a researcher could compare the course drop-out rate (the ? variable) with the presence of an eLearning module (the ? variable), and attempt to draw a conclusion.
  29. Hypothesis: Variables. An independent variable is the variable whose value one actively controls and can change. A dependent variable is the thing whose value then changes as a result.
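As a hedged illustration of the distinction above, the sketch below simulates the eLearning example: the researcher sets the independent variable (whether a cohort gets the eLearning module) and measures the dependent variable (the drop-out rate). All probabilities and cohort sizes are invented.

```python
import random

random.seed(1)  # make the illustration reproducible

def dropout_rate(has_elearning: bool, n_learners: int = 200) -> float:
    """Simulate the fraction of learners who drop out of a course.

    `has_elearning` is the independent variable (set by the researcher);
    the returned drop-out rate is the dependent variable (measured).
    The underlying probabilities are assumptions for illustration only.
    """
    p_dropout = 0.12 if has_elearning else 0.20
    dropouts = sum(random.random() < p_dropout for _ in range(n_learners))
    return dropouts / n_learners

for has_elearning in (False, True):
    print(f"eLearning module: {has_elearning}, drop-out rate: {dropout_rate(has_elearning):.2f}")
```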
  30. Hypothesis testing. In statistics, there are only two kinds of hypothesis being tested: the null hypothesis (H0) and the alternative hypothesis (H1).
  31. Hypothesis testing. In education research, the null hypothesis may be used to test differences between intervention and control groups at the end of a program; the assumption is that no difference exists between the two groups for the outcome variable being compared.
  32. Not guilty unless proven otherwise
  33. Hypothesis testing: statistical significance. Alpha is the chance you are willing to take of rejecting H0 [not guilty] when you actually should have retained it; the observed p value is compared against this level. The choice of the alpha level is up to the experimenter and may be determined as a function of the phenomenon being studied.
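A minimal sketch of this decision rule in Python, assuming scipy is available: alpha is fixed in advance, a test produces a p value, and H0 is rejected only when the p value falls below alpha. The scores are invented, and the independent-samples t test used here is introduced later in the deck.

```python
from scipy import stats

alpha = 0.05  # the risk of a false "guilty" verdict the experimenter accepts

# Invented outcome scores for a control and an intervention group.
control = [62, 58, 65, 60, 59, 63, 61, 57]
intervention = [68, 70, 64, 72, 66, 69, 71, 65]

t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < alpha:
    print("Reject H0: the groups appear to differ on the outcome.")
else:
    print("Retain H0: no detectable difference at this alpha level.")
```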
  34. Pause
  35. Experimental design
  36. You have performed a systematic review of the literature
  37. You have generated an answerable question
  38. You have generated a hypothesis
  39. What is the next step?
  40. You need to design a “clean” experiment
  41. Example. You may wish to examine whether a new hands-on, simulation-based educational program leads to adherence to hand-washing protocols [by nurses … newly hired nurses … etc.].
     - Is this a good research question?
     - Does it have all the parts?
     - What are the hypotheses? H0? H1?
  42. Example. You may wish to examine whether a new hands-on, simulation-based educational program leads to adherence to hand-washing protocols.
     - Is this a good research question? OK, but not great.
     - Does it have all the parts? No control, participants, or outcomes.
     - What are the hypotheses? H0: the old and new programs are the same. H1: the old and new programs are different (note: no directionality).
  43. To test this hypothesis requires two conditions to be met (Cook and Campbell, 1979): (1) changes in the outcome occur after, rather than before, the institution of the program/intervention; and (2) the program/intervention is the only reasonable explanation for the changes in the outcome. If there are any other explanations for the observed changes in outcomes, the researcher cannot be confident that the presumed cause-and-effect relationship is correct. How do you know if there are other explanations?
  44. Eliminating these alternative explanations is the purpose of a proper experimental design. These explanations are also known as threats to internal validity (Cook and Campbell, 1979).
  45. Minimizing threats to internal validity:
     - Argument. This is the least effective way to argue against threats to internal validity. In a paper, this appears in the introduction.
     - Design. This is by far the most powerful method to rule out alternative explanations. In a paper, this appears in the methods.
     - Analysis. The researcher can use various statistical analyses performed on the collected data. In a paper, this appears in the methods and results.
  46. The three ways of minimizing threats to validity are not mutually exclusive, and a good research plan should make use of multiple methods for reducing threats.
  47. Design construction. Most research designs can be conceptualized and represented graphically from four basic elements: time, interventions, observations, and groups (including how they are assigned, e.g., by randomization). XOXOXOXO
  48. Time. In design notation, time is represented horizontally.
     - Intervention(s). The intervention is depicted with the symbol “X”.
     - Observation(s). Assessments and observations are depicted with the symbol “O”.
     XOXOXOXOX
  49. Groups. Each group is indicated on a separate line. Most importantly, the manner in which groups are assigned to the conditions can be indicated by a letter:
     - “R” represents random assignment,
     - “N” represents non-random assignment (i.e., a nonequivalent group or cohort),
     - “C” may represent an assignment based on a cutoff score.
     Group 1: R O X O
     Group 2: N O X O
  50. The most basic causal relationship between an educational intervention and an outcome can be described using the following notation:
     X O
     This is the simplest design and serves as a starting point for the development of better strategies. What is wrong with it? There is no control for threats to internal validity: What was the level before the intervention? Were there historical events?
  51. When the researcher is faced with delivering an intervention to all participants, one can include additional observations either before or after the intervention:
     O O X O O
     This provides a “baseline” (no intervention vs. intervention), and additional posttest assessments can capture decay or a lag. What is wrong with this design? We still do not know whether X (the intervention) caused the change in O.
  52. Use a control group!
     N O X O
     N O O
     The interpretation: the study comprised two groups, with non-random assignment of participants to each group. There was an initial assessment before the intervention was implemented. Subsequently, participants in the first group received the intervention (indicated by X), while participants in the second group did not. Finally, all participants were reassessed. Why is the first “O” important? To ensure no bias (i.e., to check that the groups were comparable at baseline).
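To make the notation concrete, here is a small sketch (not from the slides) of how the N O X O / N O O design maps onto a data table: one row per participant with group, pretest, and posttest scores. All values are invented; the pretest lets us compare gains rather than raw posttest scores.

```python
# (group, pretest, posttest) for each participant; values are invented.
rows = [
    ("intervention", 55, 78),
    ("intervention", 60, 82),
    ("intervention", 52, 75),
    ("control",      57, 62),
    ("control",      61, 66),
    ("control",      54, 58),
]

def mean_gain(group: str) -> float:
    """Average posttest-minus-pretest change for one group."""
    gains = [post - pre for g, pre, post in rows if g == group]
    return sum(gains) / len(gains)

print("intervention gain:", mean_gain("intervention"))
print("control gain:", mean_gain("control"))
```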
  53. In education especially, the initial tests may be viewed as a contaminating factor:
     R X O
     R O
     The posttest-only randomized experimental design relies on random assignment of participants to the two groups to ensure a similar distribution of participants.
  54. The designs so far assess the effectiveness of an intervention compared with the absence of an alternative intervention; that is, the control (C) group did not get any intervention. An alternative is to compare two interventions:
     R X1 O
     R X2 O
  55. Inclusion of additional groups in the design may be necessary in order to rule out specific threats to validity. The researcher may be inclined to add an additional nonequivalent group from a similar institution:
     R O X O
     R O O
     N O O
  56. Another possibility is to use pre-post cohort groups:
     N O X O
     N O
       N O
     The treatment group consists of current learners, the first comparison group consists of last year’s learners assessed in the same year, and the second comparison group consists of the following year’s learners.
  57. The nature of good design:
     - Be linked with theory (know your threats)
     - Be innovative
     - Be realistic
     - Be flexible
     - Do not over-design
     Word of the day: parsimonious
  58. Pause
  59. You have performed a systematic review of the literature
  60. You have generated an answerable question
  61. You have generated a hypothesis
  62. You have designed a “clean” experiment
  63. What is the next step?
  64. You need to be able to use the stats
  65. Basic stats: variability. The differences are important, but more important is the variability (noise). Control confounding variables!
  66. How do we measure that?
  67. Independent-Samples T Test. This procedure compares means for two groups of cases:
     R X O
     R X O
     Example: learners are randomly assigned to a new educational approach group, and the rest follow the standard curriculum.
  68.–70. [Plot slides: Skill scores for the Standard vs. New groups (t-test)]
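A hedged sketch of the comparison shown in the plot slides above, assuming scipy is available; the Skill scores are invented.

```python
from scipy import stats

standard = [61, 64, 59, 66, 62, 60, 65, 63]  # invented Skill scores
new = [70, 68, 74, 66, 72, 69, 71, 73]

t_stat, p_value = stats.ttest_ind(new, standard)
print(f"Independent-samples t test: t = {t_stat:.2f}, p = {p_value:.4f}")
```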
  71. Paired-Samples T Test. This procedure compares the means of two variables for a single group. It computes the difference between the values of the two variables for each case and tests whether the average difference differs from 0.
     O X O
     Example: learners participate in a new curriculum.
  72. [Plot slide: Skill, pre vs. post (O X O design, t-test)]
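A corresponding sketch for the paired case (O X O), again with invented scores; each position in `pre` and `post` belongs to the same learner.

```python
from scipy import stats

pre = [52, 60, 47, 65, 58, 55, 62, 50]   # invented scores before the curriculum
post = [61, 66, 55, 70, 63, 60, 69, 58]  # the same learners afterwards

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"Paired-samples t test: t = {t_stat:.2f}, p = {p_value:.4f}")
```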
  73. One-Sample T Test. This procedure tests whether the mean of a single variable differs from a specified constant. Example: a researcher might want to test whether the skill improvement for the group participating in a new curriculum differs from a known average.
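And a sketch of the one-sample case; the reference value of 5 points is an assumption for illustration, not a figure from the slides.

```python
from scipy import stats

improvement = [9, 6, 8, 5, 10, 7, 6, 9]  # invented skill improvements
known_average = 5                         # assumed reference value

t_stat, p_value = stats.ttest_1samp(improvement, popmean=known_average)
print(f"One-sample t test: t = {t_stat:.2f}, p = {p_value:.4f}")
```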
  74. Limitations of the t test: only two groups (or one group against a value), and it does not correct for the number of tests.
  75. The One-Way ANOVA. This technique is an extension of the independent-samples t test. In addition to determining that differences exist among the means, you may want to know which means differ; post hoc tests are run after the experiment has been conducted to test for differences across categories.
  76. Example: learners are participating in two new programs. You want to know which one is better, but you are also interested in how the standard curriculum compares.
     - Independent variable: program (New A, New B, and Standard). Note: this is also known as a between-group or between-subject variable.
     - Dependent variable: skill.
  77. [Plot slide: Skill for the Standard, New A, and New B groups (ANOVA)]
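A sketch of the three-group comparison above, assuming scipy; the scores are invented, and the Tukey HSD post hoc test shown requires a recent SciPy release.

```python
from scipy import stats

standard = [60, 62, 58, 64, 61, 59]  # invented Skill scores
new_a = [68, 70, 66, 72, 69, 67]
new_b = [64, 66, 63, 68, 65, 62]

f_stat, p_value = stats.f_oneway(standard, new_a, new_b)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc: which pairs of means differ? (tukey_hsd is in recent SciPy versions.)
print(stats.tukey_hsd(standard, new_a, new_b))
```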
  78. Limitation of the one-way ANOVA: it allows for only one independent variable. Q: What if one of the two programs is an eLearning module and the other is f2f? You may suspect that cyber natives and cyber-shy people will benefit differently.
  79. Multifactor ANOVA. Using this procedure, you can test for the effects of individual factors, and you can also investigate interactions between factors. After an overall test has shown significance, you can use post hoc tests to evaluate differences among specific means (see the sketch after the plot slides below).
  80. [Plot slide: Skill, cyber shy vs. cyber native]
  81. [Plot slide: Skill, Standard vs. New A vs. New B]
  82. [Plot slide: Skill, program by learner type (interaction)]
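A sketch of the two-factor case suggested by the plots above, assuming pandas and statsmodels are available; the data frame and scores are invented. The model tests the main effects of program and learner type and their interaction.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented data: Skill by program (Standard, New A, New B) and learner type.
data = pd.DataFrame({
    "skill": [60, 62, 70, 72, 64, 66, 58, 61, 75, 77, 63, 65],
    "program": ["Standard", "Standard", "New A", "New A", "New B", "New B"] * 2,
    "learner": ["cyber native"] * 6 + ["cyber shy"] * 6,
})

# Two-way ANOVA with interaction: skill ~ program * learner type.
model = smf.ols("skill ~ C(program) * C(learner)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```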
  83. Non-parametric tests. Each of these tests has a non-parametric equivalent. Why? When? Use one if either of two assumptions is violated: (1) the data do not follow a normal distribution; (2) N < 15.
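A sketch of the usual non-parametric counterparts in scipy, with small invented samples: Mann-Whitney U for the independent-samples t test, Wilcoxon signed-rank for the paired t test, and Kruskal-Wallis for the one-way ANOVA.

```python
from scipy import stats

# Small invented samples (ordinal-looking scores, N well under 15 per group).
standard = [3, 4, 2, 5, 3, 4]
new = [5, 6, 4, 7, 6, 5]
third = [4, 5, 3, 6, 4, 5]

print(stats.mannwhitneyu(new, standard))    # analogue of the independent-samples t test
print(stats.kruskal(standard, new, third))  # analogue of the one-way ANOVA

pre = [3, 2, 4, 3, 5, 2]
post = [5, 4, 5, 6, 6, 4]
print(stats.wilcoxon(post, pre))            # analogue of the paired-samples t test
```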
  84. You have performed a systematic review of the literature
  85. You have generated an answerable question
  86. You have generated a hypothesis
  87. You have designed a “clean” experiment
  88. What is the next step?
  89. You have been introduced to the stats
