
Multiple Linear Regression II and ANOVA I

Explains advanced uses of multiple linear regression, including residuals, interactions, and analysis of change, then introduces the principles of ANOVA, starting with an explanation of t-tests.


Lecture 9: Multiple Linear Regression II & Analysis of Variance I
Survey Research & Design in Psychology, James Neill, 2010
Overview
- Multiple Linear Regression II
- Analysis of Variance I
Multiple Linear Regression II
- Summary of MLR I
- Partial correlations
- Residual analysis
- Interactions
- Analysis of change
Summary of MLR I
- Check assumptions: LOM (level of measurement), N, normality, linearity, homoscedasticity, collinearity, multivariate outliers, residuals
- Choose type: standard, hierarchical, stepwise, forward, backward
- Interpret: overall fit (R², and changes in R² if hierarchical), coefficients (standardised & unstandardised), partial correlations
- Equation: report if useful (e.g., if the study is predictive)
Partial correlations (rp)
rp is the correlation between X and Y after controlling for (partialling out) the influence of a third variable from both X and Y.
Example: Does years of marriage (IV) predict marital satisfaction (DV) after number of children is controlled for?
Partial correlations (rp) in MLR
- When interpreting MLR coefficients, compare the zero-order and partial correlations for each IV in each model (drawing a Venn diagram can help).
- Partial correlations will be equal to or smaller than the zero-order correlations.
- If a partial correlation is the same as the zero-order correlation, the IV operates on the DV independently of the other IVs.
Partial correlations (rp) in MLR (continued)
- To the extent that a partial correlation is smaller than the zero-order correlation, the IV's explanation of the DV is shared with other IVs.
- An IV may have a significant zero-order correlation with the DV but a non-significant partial correlation, indicating that the IV explains little unique variance in the DV.
Semi-partial correlations (sr²) in MLR
- The sr² values indicate the percentage of variance in the DV uniquely explained by each IV.
- In PASW, the sr values are labelled “part”; square these to obtain sr².
- For more information, see Allen and Bennett (2008), p. 182. (A computational sketch follows.)
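The partial vs. semi-partial distinction can be made concrete with a few lines of code. Below is a minimal Python sketch on simulated data (all variable names hypothetical): the control variable is partialled out of both X and Y for the partial r, but out of X only for the semi-partial (part) r; squaring the latter gives sr².

```python
import numpy as np

def partial_and_semipartial_r(y, x, z):
    """Correlation between y and x controlling for z.
    Partial r: z partialled out of both x and y.
    Semi-partial (part) r: z partialled out of x only."""
    # Residuals of x and y after regressing each on z (with intercept)
    Z = np.column_stack([np.ones_like(z), z])
    x_res = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    y_res = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    partial = np.corrcoef(x_res, y_res)[0, 1]
    semipartial = np.corrcoef(x_res, y)[0, 1]
    return partial, semipartial

# Hypothetical example: marital satisfaction (y), years married (x),
# number of children (z); simulated so the three are intercorrelated.
rng = np.random.default_rng(0)
z = rng.normal(size=100)
x = z + rng.normal(size=100)
y = 0.5 * x + z + rng.normal(size=100)

rp, sr = partial_and_semipartial_r(y, x, z)
print(f"partial r = {rp:.2f}, part r = {sr:.2f}, sr^2 = {sr**2:.2f}")
```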
Part & partial correlations in PASW
In the Linear Regression – Statistics dialog box, check “Part and partial correlations”.
Residual analysis
Residual analysis
Three key assumptions can be tested using plots of residuals:
- Linearity: IVs are linearly related to the DV
- Normality of residuals
- Equal variances (homoscedasticity)
Residual analysis
Assumptions about residuals:
- Random noise
- Sometimes positive, sometimes negative, but on average 0
- Normally distributed about 0
Residual analysis (figure slides: example residual plots)
Residual analysis
“The Normal P-P (Probability) Plot of Regression Standardized Residuals can be used to assess the assumption of normally distributed residuals. If the points cluster reasonably tightly along the diagonal line (as they do here), the residuals are normally distributed. Substantial deviations from the diagonal may be cause for concern.” (Allen & Bennett, 2008, p. 183)
Residual analysis (figure slide: scatterplot of standardised residuals against standardised predicted values)
Residual analysis
“The Scatterplot of standardised residuals against standardised predicted values can be used to assess the assumptions of normality, linearity and homoscedasticity of residuals. The absence of any clear patterns in the spread of points indicates that these assumptions are met.” (Allen & Bennett, 2008, p. 183)
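For readers working outside PASW, a rough equivalent of these two diagnostic plots can be produced with statsmodels and matplotlib. This is a sketch on simulated data; a Q-Q plot stands in for the P-P plot (both assess normality of residuals by comparing points to a diagonal).

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Simulated data for illustration; substitute your own IVs (X) and DV (y).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(size=200)

model = sm.OLS(y, sm.add_constant(X)).fit()
resid = model.get_influence().resid_studentized_internal  # standardised residuals

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
# Q-Q plot: points near the diagonal suggest normally distributed residuals.
sm.qqplot(resid, line="45", ax=ax1)
ax1.set_title("Normality of residuals")
# Residuals vs. predicted: no clear pattern suggests linearity & homoscedasticity.
ax2.scatter(model.fittedvalues, resid, alpha=0.5)
ax2.axhline(0, color="grey")
ax2.set_xlabel("Standardised predicted values")
ax2.set_ylabel("Standardised residuals")
plt.show()
```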
Why the big fuss about residuals?
Assumption violation → inflated Type I error rate (i.e., more false positives).
Why the big fuss about residuals? (continued)
- Standard error formulae (which are used for confidence intervals and significance tests) work when residuals are well-behaved.
- If the residuals don't meet assumptions, these formulae tend to underestimate coefficient standard errors, giving overly optimistic p-values and too-narrow CIs.
Interactions
Interactions
- Additivity refers to the assumption that the IVs act independently, i.e., they do not interact.
- However, there may also be interaction effects: the magnitude of the effect of one IV on the DV varies as a function of a second IV.
- Also known as a moderation effect.
Interactions
Some drugs interact with each other to reduce or enhance each other's effects, e.g.:
Pseudoephedrine → ↑ Arousal; Caffeine → ↑ Arousal; Pseudoephedrine × Caffeine → the combined effect on Arousal is not simply the sum of the separate effects.
Interactions
Physical exercise in natural environments may provide multiplicative benefits in reducing stress, e.g.:
Natural environment → ↓ Stress; Physical exercise → ↓ Stress; Natural environment × Physical exercise → an even greater reduction in Stress.
Interactions
- Model interactions by creating cross-product terms as additional IVs, e.g.:
  - Pseudoephedrine
  - Caffeine
  - Pseudoephedrine × Caffeine (cross-product)
- Compute a cross-product term in PASW, e.g.: Compute PseudoCaffeine = Pseudo*Caffeine.
Interactions
Y = b1x1 + b2x2 + b12(x1 × x2) + a + e
- b12 is the coefficient of the cross-product term (x1 × x2).
- b12 can be interpreted as the amount of change in the slope of the regression of Y on x1 when x2 changes by one unit.
Interactions
Conduct a hierarchical MLR:
- Step 1: Pseudoephedrine, Caffeine
- Step 2: Pseudoephedrine × Caffeine (cross-product)
Examine ΔR² to see whether the interaction term explains additional variance above and beyond the direct effects of Pseudoephedrine and Caffeine (a sketch follows).
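As a minimal sketch of this two-step procedure (Python/statsmodels; simulated data, hypothetical variable names), the cross-product is created and ΔR² compared across steps:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: pseudo and caffeine as 0/1 doses, arousal as the DV,
# simulated with a built-in interaction effect.
rng = np.random.default_rng(2)
df = pd.DataFrame({"pseudo": rng.integers(0, 2, 200),
                   "caffeine": rng.integers(0, 2, 200)})
df["arousal"] = (0.4 * df.pseudo + 0.5 * df.caffeine
                 + 0.6 * df.pseudo * df.caffeine + rng.normal(size=200))

# Step 1: main effects only
step1 = smf.ols("arousal ~ pseudo + caffeine", data=df).fit()
# Step 2: add the cross-product (interaction) term
step2 = smf.ols("arousal ~ pseudo + caffeine + pseudo:caffeine", data=df).fit()

delta_r2 = step2.rsquared - step1.rsquared  # ΔR² attributable to the interaction
print(f"R² step 1 = {step1.rsquared:.3f}, step 2 = {step2.rsquared:.3f}, "
      f"ΔR² = {delta_r2:.3f}")
print(step2.params)  # b12 is the coefficient of pseudo:caffeine
```

A meaningful increase in R² at Step 2 suggests the interaction term explains variance beyond the main effects.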
Interactions
Possible effects of Pseudoephedrine and Caffeine on Arousal:
- None
- Pseudoephedrine only (increase/decrease)
- Caffeine only (increase/decrease)
- Pseudoephedrine + Caffeine (additive increase/decrease)
- Pseudoephedrine × Caffeine (synergistic increase/decrease)
- Pseudoephedrine × Caffeine (antagonistic increase/decrease)
Interactions
- Cross-product interaction terms may be highly correlated (multicollinear) with the corresponding simple IVs, creating problems for assessing the relative importance of main effects and interaction effects.
- An alternative approach is to run separate regressions for each level of the interacting variable.
Analysis of change
Analysis of change
Example research question: In group-based mental health interventions, does the quality of social support from group members (IV1) and group leaders (IV2) explain changes in participants' mental health between the beginning and end of the intervention (DV)?
Analysis of change
Hierarchical MLR:
- DV = mental health after the intervention
- Step 1: IV1 = mental health before the intervention
- Step 2: IV2 = support from group members; IV3 = support from group leader
Analysis of change
Strategy: Use hierarchical MLR to “partial out” pre-intervention individual differences from the DV, leaving only the variance of changes in the DV between pre- and post-intervention for analysis in Step 2.
Analysis of change
Results of interest:
- Change in R²: how much variance in change scores is explained by the Step 2 predictors
- Regression coefficients for the predictors in Step 2 (see the sketch below)
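A minimal sketch of this strategy (Python/statsmodels; simulated data and hypothetical column names based on the example above):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data; column names are hypothetical stand-ins for the example.
rng = np.random.default_rng(3)
n = 150
df = pd.DataFrame({
    "mh_pre": rng.normal(size=n),
    "member_support": rng.normal(size=n),
    "leader_support": rng.normal(size=n),
})
df["mh_post"] = (0.6 * df.mh_pre + 0.3 * df.member_support
                 + 0.2 * df.leader_support + rng.normal(size=n))

# Step 1: baseline mental health only (partialled out of the DV)
step1 = smf.ols("mh_post ~ mh_pre", data=df).fit()
# Step 2: add the support predictors
step2 = smf.ols("mh_post ~ mh_pre + member_support + leader_support",
                data=df).fit()

print(f"ΔR² = {step2.rsquared - step1.rsquared:.3f}")  # variance in change explained
print(step2.params)  # coefficients of interest are the Step 2 predictors
```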
Summary (MLR II)
- Partial correlation: unique variance explained by IVs; calculate and report sr².
- Residual analysis: a way to test key assumptions.
Summary (MLR II)
- Interactions: a way to model (rather than ignore) interactions between IVs.
- Analysis of change: use hierarchical MLR to “partial out” baseline scores in Step 1, so that the Step 2 IVs predict changes over time.
Student questions?
ANOVA I: Analysing differences
t-tests:
- One sample
- Independent
- Paired
Readings
Howell (2010):
- Ch 3: The Normal Distribution
- Ch 4: Sampling Distributions and Hypothesis Testing
- Ch 7: Hypothesis Tests Applied to Means
Analysing differences
- Correlations vs. differences
- Which difference test?
- Parametric vs. non-parametric
Correlational vs. difference statistics
- Correlation and regression techniques reflect the strength of association between variables.
- Tests of differences reflect differences in the central tendency of variables between groups and measures.
Correlational vs. difference statistics
- In MLR we see the world as made of covariation: everywhere we look, we see relationships.
- In ANOVA we see the world as made of differences: everywhere we look, we see differences.
Correlational vs. difference statistics
- LR/MLR, e.g.: What is the relationship between gender and height in humans?
- t-test/ANOVA, e.g.: What is the difference between the heights of human males and females?
Are the differences in a sample generalisable to a population?
Which difference test?
How many groups (i.e., categories of the IV)?
- 1 group → one-sample t-test
- 2 groups → are the groups independent or dependent?
  - Independent groups: parametric DV → independent samples t-test; non-parametric DV → Mann-Whitney U
  - Dependent groups: parametric DV → paired samples t-test; non-parametric DV → Wilcoxon
- More than 2 groups → ANOVA models
Parametric vs. non-parametric statistics
- Parametric statistics: inferential tests that assume certain characteristics are true of an underlying population, especially the shape of its distribution.
- Non-parametric statistics: inferential tests that make few or no assumptions about the population from which observations were drawn (distribution-free tests).
Parametric vs. non-parametric statistics
- There is generally at least one non-parametric equivalent for each type of parametric test.
- Non-parametric tests are generally used when assumptions about the underlying population are questionable (e.g., non-normality).
Parametric vs. non-parametric statistics
- Parametric statistics are commonly used for normally distributed interval or ratio dependent variables.
- Non-parametric statistics can be used to analyse DVs that are non-normal, nominal, or ordinal.
- Non-parametric statistics are less powerful than parametric tests.
So, when do I use a non-parametric test?
Consider non-parametric tests when any of the following hold:
- Assumptions, like normality, have been violated.
- There is a small number of observations (N).
- The DVs have a nominal or ordinal level of measurement.
Some commonly used parametric & non-parametric tests

| Purpose | Non-parametric | Parametric |
|---|---|---|
| Compares two independent samples | Mann-Whitney U; Wilcoxon rank-sum | t-test (independent) |
| Compares two related samples | Wilcoxon matched pairs signed-rank | t-test (paired) |
| Compares three or more groups | Kruskal-Wallis | 1-way ANOVA |
| Compares groups classified by two different factors | Friedman; χ² test of independence | 2-way ANOVA |
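To illustrate the pairings in the table, here is a sketch using scipy.stats with simulated data (the function names are scipy's; the two-factor row is omitted because 2-way ANOVA is not available in scipy.stats):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a, b, c = (rng.normal(loc, 1, 40) for loc in (0.0, 0.5, 1.0))

# Two independent samples
print(stats.ttest_ind(a, b))        # parametric: independent samples t-test
print(stats.mannwhitneyu(a, b))     # non-parametric equivalent

# Two related samples (a and b treated as paired here, for illustration only)
print(stats.ttest_rel(a, b))        # parametric: paired samples t-test
print(stats.wilcoxon(a, b))         # non-parametric equivalent

# Three or more groups
print(stats.f_oneway(a, b, c))      # parametric: 1-way ANOVA
print(stats.kruskal(a, b, c))       # non-parametric: Kruskal-Wallis
```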
t-tests
- One-sample t-tests
- Independent samples t-tests
- Paired samples t-tests
Why a t-test or ANOVA?
- A t-test or ANOVA is used to determine whether a sample of scores is from the same population as another sample of scores.
- These are inferential tools for examining differences between group means.
- Is the difference between two sample means 'real' or due to chance?
t-tests
- One-sample: one group of participants, compared with a fixed, pre-existing value (e.g., population norms)
- Independent: compares mean scores on the same variable across different populations (groups)
- Paired: same participants, with repeated measures
Major assumptions
- Normally distributed variables
- Homogeneity of variance
In general, t-tests and ANOVAs are robust to violation of assumptions, particularly with large cell sizes, but don't be complacent.
Use of t in t-tests
- t reflects the ratio of between-group variance to within-group variance.
- Is t large enough that it is unlikely the two samples have come from the same population?
- Decision: Is t larger than the critical value for t? (See t-tables; depends on the critical α and N.)
Ye good ol' normal distribution (figure): approximately 68% of scores fall within ±1 SD of the mean, 95% within ±2 SD, and 99.7% within ±3 SD.
One-tail vs. two-tail tests
- A two-tailed test rejects the null hypothesis if the obtained t-value is extreme in either direction.
- A one-tailed test rejects the null hypothesis if the obtained t-value is extreme in one direction (you choose: too high or too low).
- One-tailed tests are more powerful for detecting an effect in the predicted direction (the whole α is placed in one tail), but they cannot detect differences in the other direction.
One-sample t-test
Compare one group (a sample) with a fixed, pre-existing value (e.g., population norms).
Example: Do uni students sleep less than the recommended amount? Given a sample of N = 50 uni students who sleep M = 6.5 hrs/day (SD = 1.3), does this differ significantly from 8 hrs/day (α = .05)? (See the sketch below.)
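As a worked check of this example (using only the summary statistics above): t = (6.5 − 8) / (1.3/√50) ≈ −8.16 with df = 49, which far exceeds the two-tailed critical value of about ±2.01 at α = .05. A Python sketch of the same computation:

```python
import math
from scipy import stats

# One-sample t-test from summary statistics (slide example):
# N = 50 students, M = 6.5 hrs, SD = 1.3, test value mu0 = 8, alpha = .05
n, m, sd, mu0 = 50, 6.5, 1.3, 8.0
se = sd / math.sqrt(n)                 # standard error of the mean
t = (m - mu0) / se                     # t = -1.5 / 0.184 ≈ -8.16
p = 2 * stats.t.sf(abs(t), df=n - 1)   # two-tailed p-value
print(f"t({n - 1}) = {t:.2f}, p = {p:.2g}")  # well beyond the .05 criterion
```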
Independent groups t-test
Compares mean scores on the same variable across different populations (groups), e.g.:
- Do males and females differ in IQ?
- Do Americans vs. non-Americans differ in their approval of Barack Obama?
Assumptions (independent samples t-test)
- Level of measurement: IV is ordinal/categorical; DV is interval/ratio
- Homogeneity of variance: if variances are unequal (Levene's test), an adjustment is made (see the sketch below)
- Normality: t-tests are robust to modest departures from normality; otherwise consider the Mann-Whitney U test
- Independence of observations (one participant's score is not dependent on any other participant's score)
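A sketch of this workflow in scipy (simulated, hypothetical data): Levene's test guides whether to use the pooled-variance t-test or the unequal-variances (Welch) adjustment.

```python
import numpy as np
from scipy import stats

# Hypothetical IQ scores for two independent groups
rng = np.random.default_rng(5)
males = rng.normal(100, 15, 60)
females = rng.normal(102, 15, 55)

# Levene's test: if variances differ significantly, use the Welch adjustment
lev = stats.levene(males, females)
equal_var = lev.pvalue > .05
t, p = stats.ttest_ind(males, females, equal_var=equal_var)
print(f"Levene p = {lev.pvalue:.3f}, t = {t:.2f}, p = {p:.3f}")
```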
Example figures: Do males and females differ in memory recall? Adolescents' same-sex relations in single-sex vs. co-ed schools; adolescents' opposite-sex relations in single-sex vs. co-ed schools.
Independent samples t-test vs. 1-way ANOVA
- Comparison between the means of 2 independent groups = t-test (e.g., what is the difference in Educational Satisfaction between male and female students?)
- Comparison between the means of 3+ independent groups = 1-way ANOVA (e.g., what is the difference in Educational Satisfaction between students enrolled in four different faculties?)
Paired samples t-test
- Same participants, with repeated measures
- Data are sampled within subjects, e.g.:
  - Pre- vs. post-treatment ratings
  - Different factors, e.g., voters' approval ratings of candidates X and Y
Assumptions (paired samples t-test)
- Level of measurement:
  - IV: two measures from the same participants (within subjects), i.e., a variable measured on two occasions or two different variables measured on the same occasion
  - DV: continuous (interval or ratio)
- Normal distribution of difference scores (robust to violation with larger samples; see the sketch below)
- Independence of observations (one participant's score is not dependent on another's score)
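A sketch of a paired test in scipy (simulated, hypothetical data), including a rough check on the normality of the difference scores:

```python
import numpy as np
from scipy import stats

# Hypothetical pre- vs. post-treatment ratings from the same participants
rng = np.random.default_rng(6)
pre = rng.normal(5.0, 1.0, 30)
post = pre + rng.normal(0.5, 0.8, 30)   # same people, repeated measure

diff = post - pre
print(stats.shapiro(diff))         # rough check: normality of difference scores
print(stats.ttest_rel(post, pre))  # paired samples t-test
# If the difference scores are clearly non-normal, consider
# the non-parametric equivalent: stats.wilcoxon(post, pre)
```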
Example figures: Does females' memory recall change over time? Adolescents' opposite-sex vs. same-sex relations.
Paired samples t-test vs. 1-way repeated measures ANOVA
- Comparison between the means of 2 within-subject measures = t-test
- Comparison between the means of 3+ within-subject measures = 1-way repeated measures ANOVA (e.g., what is the difference between Campus, Social, and Education Satisfaction?)
Summary (analysing differences)
- Non-parametric and parametric tests can be used for examining differences between the central tendency of two or more variables.
- Develop a conceptualisation of when to use each of the parametric tests, from the one-sample t-test through to MANOVA (e.g., a decision chart).
Summary (analysing differences)
t-tests:
- One-sample
- Independent samples
- Paired samples
What will be covered in ANOVA II?
- 1-way ANOVA
- 1-way repeated measures ANOVA
- Factorial ANOVA
- Mixed design ANOVA
- ANCOVA
- MANOVA
- Repeated measures MANOVA
References
- Allen, P., & Bennett, K. (2008). SPSS for the health and behavioural sciences. South Melbourne, Victoria, Australia: Thomson.
- Francis, G. (2007). Introduction to SPSS for Windows: v. 15.0 and 14.0 with notes for Studentware (5th ed.). Sydney: Pearson Education.
- Howell, D. C. (2010). Statistical methods for psychology (7th ed.). Belmont, CA: Wadsworth.
Open Office Impress
- This presentation was made using Open Office Impress.
- Free and open source software.
- http://www.openoffice.org/product/impress.html
