Multiple Linear Regression II and ANOVA I
Explains advanced use of multiple linear regression, including residuals, interactions and analysis of change, then introduces the principles of ANOVA starting with explanation of t-tests.

License: CC Attribution
Comments

  • Assoc Prof., NIMS University, Jaipur: The approach and content is appreciable.
  • See also Multiple Linear Regression II and ANOVA I – http://ucspace.canberra.edu.au/display/7126/Lecture+-+Multiple+Linear+Regression+II+and+ANOVA+I
Slide notes

  • 7126/6667 Survey Research & Design in Psychology, Semester 1, 2009, University of Canberra, ACT, Australia. James T. Neill.
    Home page: http://ucspace.canberra.edu.au/display/7126
    Lecture pages: http://en.wikiversity.org/wiki/Survey_research_methods_and_design_in_psychology and http://ucspace.canberra.edu.au/pages/viewpage.action?pageId=48398957
    Image sources: http://commons.wikimedia.org/wiki/File:Vidrarias_de_Laboratorio.jpg (License: Public domain); http://commons.wikimedia.org/wiki/File:3D_Bar_Graph_Meeting.jpg (License: CC-by-A 2.0; Author: lumaxart, http://www.flickr.com/photos/lumaxart/)
    Description: Explains advanced use of multiple linear regression, including residuals, interactions and analysis of change, then introduces the principles of ANOVA starting with explanation of t-tests. This lecture is accompanied by two computer-lab based tutorials, the notes for which are available here: http://ucspace.canberra.edu.au/display/7126/Tutorial+-+Multiple+linear+regression and http://ucspace.canberra.edu.au/display/7126/Tutorial+-+ANOVA
  • Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain
  • These residual slides are based on Francis (2007) – MLR (Section 5.1.4) – Practical Issues & Assumptions, pp. 126-127, and Allen and Bennett (2008). Note that according to Francis, residual analysis can also test additivity (i.e., no interactions b/w IVs), but this has been left out for the sake of simplicity.
  • Image source: http://www.gseis.ucla.edu/courses/ed230bc1/notes1/con1.html If IVs are correlated then you should also examine the difference between the zero-order and partial correlations.
  • Image source: Unknown Residuals are the distance between the predicted and actual scores. Standardised residuals (subtract mean, divide by standard deviation).
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ Histogram of the residuals – they should be approximately normally distributed. This plot is very slightly positively skewed – we are only concerned about gross violations.
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ Normal probability plot – of regression standardised residuals. Shows the same information as previous slide – reasonable normality of residuals.
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ Histogram compared to normal probability plot. Variations from normality can be seen on both plots.
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ A plot of Predicted values (ZPRED) by Residuals (ZRESID). This should show a broad, horizontal band of points (it does). Any ‘fanning out’ of the residuals indicates a violation of the homoscedasticity assumption, and any pattern in the plot indicates a violation of linearity.
  • Image source: http://commons.wikimedia.org/wiki/File:Color_icon_orange.png Image license: Public domain Image author: User:Booyabazooka, http://en.wikipedia.org/wiki/User:Booyabazooka
  • Interactions occur potentially in situations involving univariate analysis of variance and covariance (ANOVA and ANCOVA), multivariate analysis of variance and covariance (MANOVA and MANCOVA), multiple linear regression (MLR), logistic regression, path analysis, and covariance structure modeling. ANOVA and ANCOVA models are special cases of MLR in which one or more predictors are nominal or ordinal "factors." Interaction effects are sometimes called moderator effects because the interacting third variable, which changes the relation between two original variables, is a moderator variable which moderates the original relationship. For instance, the relation between income and conservatism may be moderated depending on the level of education.
  • Image source: http://www.flickr.com/photos/comedynose/3491192647/ License: CC-by-A 2.0, http://creativecommons.org/licenses/by/2.0/deed.en Author: comedynose, http://www.flickr.com/photos/comedynose/ Image source: http://www.flickr.com/photos/teo/2068161/ License: CC-by-SA 2.0, http://creativecommons.org/licenses/by-sa/2.0/deed.en Author: Teo, http://www.flickr.com/photos/teo/
  • Image source: http://www.flickr.com/photos/hamed/258971456/ License: CC-by-A 2.0 Author: Hamed Saber, http://www.flickr.com/photos/hamed/
  • Example hypotheses: Income has a direct positive influence on Conservatism. Education has a direct negative influence on Conservatism. ↑ Income combined with ↓ Education may have a very -ve effect on Conservatism above and beyond that predicted by the direct effects – i.e., there is an interaction b/w Income and Education.
  • Likewise, power terms (e.g., x squared) can be added as independent variables to explore curvilinear effects.
  • For example, conduct a separate regression for males and females. Advanced notes: It may be desirable to use centered variables (where one has subtracted the mean from each datum) -- a transformation which often reduces multicollinearity. Note also that there are alternatives to the crossproduct approach to analysing interactions: http://www2.chass.ncsu.edu/garson/PA765/regress.htm#interact
  • Image source: http://commons.wikimedia.org/wiki/File:PMinspirert.jpg Image license: CC-by-SA, http://creativecommons.org/licenses/by-sa/3.0/ and GFDL, http://commons.wikimedia.org/wiki/Commons:GNU_Free_Documentation_License Image author: Bjørn som tegner , http://commons.wikimedia.org/wiki/User:Bj%C3%B8rn_som_tegner
  • Step 1 MH1 should be a highly significant predictor. The left over variance represents the change in MH b/w Time 1 and 2 (plus error). Step 2 If IV2 and IV3 are significant predictors, then they help to predict changes in MH.
  • Image sources: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain http://commons.wikimedia.org/wiki/File:3D_Bar_Graph_Meeting.jpg ' License: CC-by-A 2.0 Author: lumaxart ( http://www.flickr.com/photos/lumaxart/ )
  • Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain
  • Correlation/regression techniques reflect the strength of association between continuous variables Tests of group differences ( t -tests, ANOVA) indicate whether significant differences exist between group means
  • In MLR we see the world as made of covariation: everywhere we look, we see patterns and relationships. In ANOVA we see the world as made of differences: everywhere we look, we see things that are the same as, or more or less than, other things.
  • Image source: Unknown
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ “ A t-test is used to determine whether a set or sets of scores are from the same population.” - Coakes & Steed (1999), p.61
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ Adapted from: http://www.tufts.edu/~gdallal/npar.htm
  • Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain
  • “ A t-test is used to determine whether a set or sets of scores are from the same population.” - Coakes & Steed (1999), p.61
  • Image source: Unknown
  • Image source: Unknown
  • Also called: One sample t -test
  • Adapted from slide 23 of Howell Ch12 Powerpoint:
    IV is ordinal / categorical (e.g., gender); DV is interval / ratio (e.g., self-esteem).
    Homogeneity of variance: if variances are unequal (Levene’s test), an adjustment is made.
    Normality: t-tests are robust to modest departures from normality (often violated without consequence) – look at histograms, skewness, & kurtosis; consider use of the Mann-Whitney U test if the departure from normality is severe, particularly if the sample size is small (e.g., < 50).
    Independence of observations (one participant’s score is not dependent on any other participant’s score).
  • Image source: Slide 23 of Howell Ch12 Powerpoint. Independent samples t -test
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
  • Also called related samples t-test or repeated measures t-test
  • Also known as: Related samples t -test and Repeated measures t -test How much do you like the colour red? How much do you like the colour blue? Is there a difference between people’s liking of red and blue?
  • Image source: Slide 23 of Howell Ch12 Powerpoint
  • Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/

Multiple Linear Regression II and ANOVA I Presentation Transcript

  • 1. Lecture 9 Survey Research & Design in Psychology James Neill, 2010 Multiple Linear Regression II & Analysis of Variance I
  • 2. Overview
    • Multiple Linear Regression II
    • 3. Analysis of Variance I
  • 4. Multiple Linear Regression II
    • Summary of MLR I
    • 5. Partial correlations
    • 6. Residual analysis
    • 7. Interactions
    • 8. Analysis of change
  • 9. Summary of MLR I
    • Check assumptions – LOM, N , Normality, Linearity, Homoscedasticity, Collinearity, MVOs, Residuals
    • 10. Choose type – Standard, Hierarchical, Stepwise, Forward, Backward
    • 11. Interpret – Overall ( R 2 , Changes in R 2 (if hierarchical)), Coefficients (Standardised & unstandardised), Partial correlations
    • 12. Equation – If useful (e.g., is the study predictive?)
  • 13. Partial correlations ( r p ) r p between X and Y after controlling for (partialling out) the influence of a 3rd variable from both X and Y. Example: Does years of marriage (IV) predict marital satisfaction (DV) after number of children is controlled for?
  • 14. Partial correlations ( r p ) in MLR
    • When interpreting MLR coefficients, compare the 0-order and partial correlations for each IV in each model -> draw a Venn diagram
    • 15. Partial correlations will be equal to or smaller than the 0-order correlations
    • 16. If a partial correlation is the same as the 0-order correlation, then the IV operates independently on the DV.
  • 17. Partial correlations ( r p ) in MLR
    • To the extent that a partial correlation is smaller than the 0-order correlation, then the IV's explanation of the DV is shared with other IVs.
    • 18. An IV may have a sig. 0-order correlation with the DV, but a non-sig. partial correlation. This would indicate that there is non-sig. unique variance explained by the IV.
  • 19. Semi-partial correlations ( sr 2 ) in MLR
    • The sr 2 values indicate the percentage of variance in the DV which is uniquely explained by each IV .
    • 20. In PASW, the sr values are labelled “part”. You need to square these to get sr 2 .
    • 21. For more info, see Allen and Bennett (2008) p. 182
  • 22. Part & partial correlations in PASW In the Linear Regression – Statistics dialog box, check “Part and partial correlations” (a computational sketch follows below)
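The partial ( r p ) and semi-partial ( sr ) correlations above can also be computed directly by residualising. A minimal NumPy sketch on hypothetical data (the variable names years_married, n_children, and satisfaction are made up for illustration):

    import numpy as np

    def residualise(v, z):
        """Return v minus its least-squares prediction from z (plus an intercept)."""
        Z = np.column_stack([np.ones(len(v)), z])
        coef, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ coef

    rng = np.random.default_rng(0)                          # hypothetical data
    n_children = rng.poisson(2, 200).astype(float)
    years_married = 3 * n_children + rng.normal(0, 2, 200)
    satisfaction = 0.5 * years_married + rng.normal(0, 3, 200)

    # Partial r: the control variable is removed from BOTH the IV and the DV
    r_p = np.corrcoef(residualise(years_married, n_children),
                      residualise(satisfaction, n_children))[0, 1]

    # Semi-partial r: the control is removed from the IV only; square it for sr2
    sr = np.corrcoef(residualise(years_married, n_children), satisfaction)[0, 1]
    print(round(r_p, 3), round(sr ** 2, 3))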
  • 23. Residual analysis
  • 24. Residual analysis Three key assumptions can be tested using plots of residuals:
    • Linearity : IVs are linearly related to DV
    • 25. Normality of residuals
    • 26. Equal variances (Homoscedasticity)
  • 27. Residual analysis Assumptions about residuals:
    • Random noise
    • 28. Sometimes positive, sometimes negative but, on average, 0
    • 29. Normally distributed about 0
  • 30. Residual analysis
  • 31. Residual analysis
  • 32. Residual analysis “ The Normal P-P (Probability) Plot of Regression Standardized Residuals can be used to assess the assumption of normally distributed residuals. If the points cluster reasonably tightly along the diagonal line (as they do here), the residuals are normally distributed. Substantial deviations from the diagonal may be cause for concern.” Allen & Bennett, 2008, p. 183
  • 33. Residual analysis
  • 34. Residual analysis “ The Scatterplot of standardised residuals against standardised predicted values can be used to assess the assumptions of normality, linearity and homoscedasticity of residuals. The absence of any clear patterns in the spread of points indicates that these assumptions are met.” Allen & Bennett, 2008, p. 183
  • 35. Why the big fuss about residuals? ↑ assumption violation → ↑ Type I error rate (i.e., more false positives)
  • 36. Why the big fuss about residuals?
    • Standard error formulae (which are used for confidence intervals and sig. tests) work when residuals are well-behaved.
    • 37. If the residuals don’t meet assumptions, these formulae tend to underestimate coefficient standard errors, giving overly optimistic p -values and too-narrow CIs (a diagnostic-plot sketch follows below).
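The three residual plots discussed above (histogram, normal probability plot, ZPRED x ZRESID scatterplot) can be reproduced outside of PASW. A minimal sketch using statsmodels and matplotlib on hypothetical data; note that scipy's probplot draws a Q-Q plot, which serves the same purpose as SPSS's Normal P-P plot:

    import numpy as np
    import matplotlib.pyplot as plt
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(1)                          # hypothetical data
    X = rng.normal(size=(200, 2))
    y = 2 + X @ np.array([0.5, -0.3]) + rng.normal(size=200)

    fit = sm.OLS(y, sm.add_constant(X)).fit()
    zresid = fit.get_influence().resid_studentized_internal  # standardised residuals
    zpred = (fit.fittedvalues - fit.fittedvalues.mean()) / fit.fittedvalues.std()

    fig, ax = plt.subplots(1, 3, figsize=(12, 4))
    ax[0].hist(zresid, bins=20)                          # normality: roughly bell-shaped?
    stats.probplot(zresid, dist="norm", plot=ax[1])      # points should hug the diagonal
    ax[2].scatter(zpred, zresid)                         # should be a broad horizontal band
    ax[2].axhline(0, linestyle="--")
    plt.show()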
  • 38. Interactions
  • 39. Interactions
    • Additivity refers to the assumption that the IVs act independently, i.e., they do not interact.
    • 40. However, there may also be interaction effects - when the magnitude of the effect of one IV on a DV varies as a function of a second IV.
    • 41. Also known as a moderation effect .
  • 42. Interactions Some drugs interact with each other to reduce or enhance each other’s effects, e.g., Pseudoephedrine → ↑ Arousal; Caffeine → ↑ Arousal; Pseudoeph. X Caffeine → ↑↑ Arousal
  • 43. Interactions Physical exercise in natural environments may provide multiplicative benefits in reducing stress, e.g., Natural environment → ↓ Stress; Physical exercise → ↓ Stress; Natural env. X Phys. ex. → ↓↓ Stress
  • 44. Interactions
    • Model interactions by creating cross-product term IVs, e.g.,:
      • Pseudoephedrine
      • 45. Caffeine
      • 46. Pseudoephedrine x Caffeine (cross-product)
    • Compute a cross-product term, e.g.:
      • Compute PseudoCaffeine = Pseudo*Caffeine.
  • 47. Interactions Y = b₁x₁ + b₂x₂ + b₁₂x₁₂ + a + e
    • x₁₂ is the cross-product of the first two IVs (x₁ × x₂)
    • 48. b₁₂ can be interpreted as the amount of change in the slope of the regression of Y on x₁ when x₂ increases by one unit.
  • 49. Interactions
    • Conduct Hierarchical MLR
    • 50. Step 1 :
      • Pseudoephedrine
      • 51. Caffeine
    • Step 2 :
      • Pseudo x Caffeine (cross-product)
    • Examine ΔR², to see whether the interaction term explains additional variance above and beyond the direct effects of Pseudo and Caffeine (a worked sketch follows below).
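A minimal statsmodels sketch of this hierarchical procedure, assuming a hypothetical data frame with pseudo, caffeine, and arousal columns:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)                          # hypothetical data
    df = pd.DataFrame({"pseudo": rng.normal(size=150),
                       "caffeine": rng.normal(size=150)})
    df["arousal"] = (0.4 * df.pseudo + 0.4 * df.caffeine
                     + 0.3 * df.pseudo * df.caffeine + rng.normal(size=150))
    df["pseudo_x_caffeine"] = df.pseudo * df.caffeine       # cross-product term

    step1 = smf.ols("arousal ~ pseudo + caffeine", df).fit()
    step2 = smf.ols("arousal ~ pseudo + caffeine + pseudo_x_caffeine", df).fit()

    delta_r2 = step2.rsquared - step1.rsquared              # variance added by the interaction
    f, p, _ = step2.compare_f_test(step1)                   # significance of the R² change
    print(round(delta_r2, 3), round(p, 4))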
  • 52. Interactions Possible effects of Pseudo and Caffeine on Arousal:
    • None
    • 53. Pseudo only (incr./decr.)
    • 54. Caffeine only (incr./decr.)
    • 55. Pseudo + Caffeine (additive inc./dec.)
    • 56. Pseudo x Caffeine (synergistic inc./dec.)
    • 57. Pseudo x Caffeine (antagonistic inc./dec.)
  • 58. Interactions
    • Cross-product interaction terms may be highly correlated (multicollinear) with the corresponding simple IVs, creating problems with assessing the relative importance of main effects and interaction effects. Centering the IVs (subtracting the mean from each datum) before forming the cross-product often reduces this multicollinearity (see the sketch below).
    • 59. An alternative approach is to run separate regressions for each level of the interacting variable .
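A short sketch of the centering remedy mentioned above, on hypothetical pseudo and caffeine scores (non-zero means, so the raw cross-product would correlate strongly with its parents):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)                          # hypothetical data
    df = pd.DataFrame({"pseudo": rng.normal(5, 1, 150),
                       "caffeine": rng.normal(3, 1, 150)})

    # Centre each IV, then rebuild the cross-product from the centred scores
    df["pseudo_c"] = df.pseudo - df.pseudo.mean()
    df["caffeine_c"] = df.caffeine - df.caffeine.mean()
    df["interaction_c"] = df.pseudo_c * df.caffeine_c

    # The centred cross-product is typically far less correlated with its parents
    print(df[["pseudo_c", "caffeine_c", "interaction_c"]].corr().round(2))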
  • 60. Analysis of Change
  • 61. Analysis of change Example research question: In group-based mental health interventions, does the quality of social support from group members (IV1) and group leaders (IV2) explain changes in participants’ mental health between the beginning and end of the intervention (DV)?
  • 62. Analysis of change Hierarchical MLR
    • DV = Mental health after the intervention
    Step 1
    • IV1 = Mental health before the intervention
    Step 2
    • IV2 = Support from group members
    • 63. IV3 = Support from group leader
  • 64. Analysis of change Strategy : Use hierarchical MLR to “partial out” pre-intervention individual differences from the DV, leaving only the variances of the changes in the DV b/w pre- and post-intervention for analysis in Step 2.
  • 65. Analysis of change Results of interest
    • Change in R 2 – how much variance in change scores is explained by the predictors (a worked sketch follows below)
    • 66. Regression coefficients for predictors in step 2
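A minimal sketch of this analysis-of-change strategy, assuming hypothetical columns mh_t1 (baseline mental health), mh_t2 (post-intervention), member_support, and leader_support:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)                          # hypothetical data
    n = 120
    df = pd.DataFrame({"mh_t1": rng.normal(50, 10, n),
                       "member_support": rng.normal(size=n),
                       "leader_support": rng.normal(size=n)})
    df["mh_t2"] = (df.mh_t1 + 3 * df.member_support
                   + 2 * df.leader_support + rng.normal(0, 5, n))

    step1 = smf.ols("mh_t2 ~ mh_t1", df).fit()              # partial out baseline MH
    step2 = smf.ols("mh_t2 ~ mh_t1 + member_support + leader_support", df).fit()

    print(round(step2.rsquared - step1.rsquared, 3))        # variance in *change* explained
    print(step2.params[["member_support", "leader_support"]])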
  • 67. Summary (MLR II)
    • Partial correlation
    Unique variance explained by IVs; calculate and report sr 2 .
    • Residual analysis
    A way to test key assumptions.
  • 68. Summary (MLR II)
    • Interactions
    A way to model (rather than ignore) interactions between IVs.
    • Analysis of change
    Use hierarchical MLR to “partial out” baseline scores in Step 1 in order to use IVs in Step 2 to predict changes over time.
  • 69. Student questions ?
  • 70. ANOVA I Analysing differences t -tests
    • One sample
    • 71. Independent
    • 72. Paired
  • 73. Readings – Howell (2010):
    • Ch3 The Normal Distribution
    • 74. Ch4 Sampling Distributions and Hypothesis Testing
    • 75. Ch7 Hypothesis Tests Applied to Means
  • 76. Analysing differences
    • Correlations vs. differences
    • 77. Which difference test?
    • 78. Parametric vs. non-parametric
  • 79. Correlational vs difference statistics
    • Correlation and regression techniques reflect the strength of association
    • 80. Tests of differences reflect differences in central tendency of variables between groups and measures.
  • 81. Correlational vs difference statistics
    • In MLR we see the world as made of covariation. Everywhere we look, we see relationships .
    • 82. In ANOVA we see the world as made of differences. Everywhere we look we see differences .
  • 83. Correlational vs difference statistics
    • LR/MLR e.g., What is the relationship between gender and height in humans?
    • 84. t -test/ANOVA e.g., What is the difference between the heights of human males and females?
  • 85. Are the differences in a sample generalisable to a population?
  • 86. Which difference test? (2 groups) How many groups (i.e., categories of IV)?
    • 1 group = one-sample t -test
    • 2 groups: are the groups independent or dependent?
      • Independent groups: parametric DV = independent samples t -test; non-parametric DV = Mann-Whitney U
      • Dependent groups: parametric DV = paired samples t -test; non-parametric DV = Wilcoxon
    • More than 2 groups = ANOVA models
    (A small helper sketch of this chart follows below.)
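The decision chart above can be expressed as a tiny helper function; a sketch that returns test names only (it does not run the tests):

    def choose_difference_test(n_groups: int, dependent: bool, parametric: bool) -> str:
        """Map the decision chart to a test name (sketch)."""
        if n_groups == 1:
            return "one-sample t-test"
        if n_groups == 2:
            if parametric:
                return "paired samples t-test" if dependent else "independent samples t-test"
            return "Wilcoxon" if dependent else "Mann-Whitney U"
        return "ANOVA models"

    print(choose_difference_test(2, dependent=False, parametric=True))
    # -> independent samples t-test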
  • 87. Parametric vs. non-parametric statistics Parametric statistics – inferential test that assumes certain characteristics are true of an underlying population, especially the shape of its distribution. Non-parametric statistics – inferential test that makes few or no assumptions about the population from which observations were drawn (distribution-free tests).
  • 88. Parametric vs. non-parametric statistics
    • There is generally at least one non-parametric equivalent test for each type of parametric test.
    • 89. Non-parametric tests are generally used when assumptions about the underlying population are questionable (e.g., non-normality).
  • 90. Parametric vs. non-parametric statistics
    • Parametric statistics are commonly used for normally distributed interval or ratio dependent variables.
    • 91. Non-parametric statistics can be used to analyse DVs that are non-normal or are nominal or ordinal.
    • 92. Non-parametric statistics are less powerful than parametric tests.
  • 93. So, when do I use a non-parametric test? Consider non-parametric tests when (any of the following):
    • Assumptions, like normality, have been violated.
    • 94. Small number of observations ( N ).
    • 95. DVs have nominal or ordinal levels of measurement.
  • 96. Some commonly used parametric & non-parametric tests (scipy equivalents are sketched below)
    Purpose | Parametric | Non-parametric
    Compares two independent samples | t test (independent) | Mann-Whitney U; Wilcoxon rank-sum
    Compares two related samples | t test (paired) | Wilcoxon matched pairs signed-rank
    Compares three or more groups | 1-way ANOVA | Kruskal-Wallis
    Compares groups classified by two different factors | 2-way ANOVA | Friedman; χ² test of independence
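All of the non-parametric tests in this table are available in scipy.stats; a sketch on hypothetical equal-length samples:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    a, b, c = rng.normal(size=(3, 30))                      # hypothetical samples

    print(stats.mannwhitneyu(a, b))          # analogue of independent samples t-test
    print(stats.wilcoxon(a, b))              # analogue of paired samples t-test
    print(stats.kruskal(a, b, c))            # analogue of 1-way ANOVA
    print(stats.friedmanchisquare(a, b, c))  # analogue of repeated-measures designs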
  • 97. t -tests
    • t -tests
    • 98. One-sample t -tests
    • 99. Independent sample t -tests
    • 100. Paired sample t -tests
  • 101. Why a t -test or ANOVA?
    • A t -test or ANOVA is used to determine whether a sample of scores is from the same population as another sample of scores.
    • 102. These are inferential tools for examining differences between group means.
    • 103. Is the difference between two sample means ‘real’ or due to chance?
  • 104. t -tests
    • One-sample One group of participants, compared with fixed, pre-existing value (e.g., population norms)
    • 105. Independent Compares mean scores on the same variable across different populations (groups)
    • 106. Paired Same participants, with repeated measures
  • 107. Major assumptions
    • Normally distributed variables
    • 108. Homogeneity of variance
    In general, t -tests and ANOVAs are robust to violation of assumptions, particularly with large cell sizes, but don't be complacent .
  • 109. Use of t in t -tests
    • t reflects the size of the difference between group means relative to the variability within groups
    • 110. Is the t large enough that it is unlikely that the two samples have come from the same population?
    • 111. Decision: Is t larger than the critical value for t ? (see t tables – depends on critical α and N ; a look-up sketch follows below)
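Rather than consulting printed t tables, the critical value can be looked up directly; a minimal scipy sketch:

    from scipy import stats

    alpha, df = .05, 48                       # e.g., two groups of 25: df = N1 + N2 - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-tailed critical value, ~2.01
    print(round(t_crit, 3))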
  • 112. Ye good ol’ normal distribution: ~68% of scores fall within ±1 SD of the mean, ~95% within ±2 SD, and ~99.7% within ±3 SD.
  • 113. One-tail vs. two-tail tests
    • Two-tailed test rejects null hypothesis if obtained t -value is extreme in either direction
    • 114. One-tailed test rejects null hypothesis if obtained t -value is extreme in one direction (you choose – too high or too low)
    • 115. One-tailed tests are more powerful than two-tailed (for a given α), but they are only focused on identifying differences in one direction.
  • 116. One sample t -test
    • Compare one group (a sample) with a fixed, pre-existing value (e.g., population norms)
    Do uni students sleep less than the recommended amount? e.g., Given a sample of N = 50 uni students who sleep M = 6.5 hrs/day ( SD = 1.3), does this differ significantly from 8 hrs/day ( α = .05)? (A worked calculation follows below.)
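A worked version of the sleep example. scipy.stats.ttest_1samp needs raw scores, so with only summary statistics the t is computed by hand; a sketch:

    import numpy as np
    from scipy import stats

    m, mu, sd, n = 6.5, 8.0, 1.3, 50          # sample stats from the slide
    t = (m - mu) / (sd / np.sqrt(n))          # one-sample t ≈ -8.16
    p = 2 * stats.t.sf(abs(t), df=n - 1)      # two-tailed p-value (tiny)
    print(round(t, 2), p)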
  • 117. Independent groups t -test Compares mean scores on the same variable across different populations (groups)
    • Do males & females differ in IQ?
    • 118. Do Americans vs. Non-Americans differ in their approval of Barack Obama?
  • 119. Assumptions (Indep. samples t -test)
    • LOM
      • IV is ordinal / categorical
      • 120. DV is interval / ratio
    • Homogeneity of Variance : If variances unequal (Levene’s test), adjustment made (see the sketch after this list)
    • 121. Normality : t -tests robust to modest departures from normality, otherwise consider use of Mann-Whitney U test
    • 122. Independence of observations (one participant’s score is not dependent on any other participant’s score)
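A sketch of the independent samples procedure, including the Levene check and the Welch adjustment when variances look unequal (hypothetical IQ-style scores):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)                          # hypothetical scores
    males = rng.normal(100, 15, 40)
    females = rng.normal(104, 15, 45)

    _, lev_p = stats.levene(males, females)                 # homogeneity of variance
    t, p = stats.ttest_ind(males, females,
                           equal_var=lev_p > .05)           # Welch adjustment if unequal
    print(round(t, 2), round(p, 4))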
  • 123. Do males and females differ in memory recall?
  • 124. Adolescents' Same Sex Relations in Single Sex vs. Co-Ed Schools
  • 125. Adolescents' Opposite Sex Relations in Single Sex vs. Co-Ed Schools
  • 126. Independent samples t -test → 1-way ANOVA
    • Comparison b/w means of 2 independent sample variables = t -test (e.g., what is the difference in Educational Satisfaction between male and female students?)
    • 127. Comparison b/w means of 3+ independent sample variables = 1-way ANOVA (e.g., what is the difference in Educational Satisfaction between students enrolled in four different faculties?)
  • 128. Paired samples t -test
    • Same participants, with repeated measures
    • 129. Data is sampled within subjects
    • Pre- vs. post- treatment ratings
    • 130. Different factors e.g., Voter’s approval ratings of candidates X and Y
  • 131. Assumptions (Paired samples t -test)
    • LOM :
      • IV: Two measures from same participants (w/in subjects)
        • a variable measured on two occasions or
        • 132. two different variables measured on the same occasion
      • DV: Continuous (Interval or ratio)
    • Normal distribution of difference scores (robust to violation with larger samples)
    • 133. Independence of observations (one participant’s score is not dependent on another’s score). A worked sketch follows below.
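A minimal sketch of a paired samples t-test on hypothetical pre/post treatment ratings from the same participants:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    pre = rng.normal(50, 10, 30)                            # hypothetical pre-treatment ratings
    post = pre + rng.normal(2, 5, 30)                       # same participants, post-treatment

    t, p = stats.ttest_rel(pre, post)                       # tests the mean difference score
    print(round(t, 2), round(p, 4))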
  • 134. Does females’ memory recall change over time?
  • 135. Adolescents' Opposite Sex vs. Same Sex Relations
  • 136. Paired samples t -test → 1-way repeated measures ANOVA
    • Comparison b/w means of 2 within subject variables = t -test
    • 137. Comparison b/w means of 3+ within subject variables = 1-way ANOVA (e.g., what is the difference in Campus, Social, and Education Satisfaction?)
  • 138. Summary (Analysing Differences)
    • Non-parametric and parametric tests can be used for examining differences between the central tendency of two or more variables
    • 139. Develop a conceptualisation of when to use each of the parametric tests, from the one-sample t -test through to MANOVA (e.g., a decision chart).
  • 140. Summary (Analysing Differences)
    • t -tests
      • One-sample
      • 141. Independent-samples
      • 142. Paired samples
  • 143. What will be covered in ANOVA II?
    • 1-way ANOVA
    • 144. 1-way repeated measures ANOVA
    • 145. Factorial ANOVA
    • 146. Mixed design ANOVA
    • 147. ANCOVA
    • 148. MANOVA
    • 149. Repeated measures MANOVA
  • 150. References
    • Allen, P. & Bennett, K. (2008). SPSS for the health and behavioural sciences . South Melbourne, Victoria, Australia: Thomson.
    • 151. Francis, G. (2007). Introduction to SPSS for Windows: v. 15.0 and 14.0 with Notes for Studentware (5th ed.). Sydney: Pearson Education.
    • 152. Howell, D. C. (2010). Statistical methods for psychology (7th ed.). Belmont, CA: Wadsworth.
  • 153. Open Office Impress
    • This presentation was made using Open Office Impress.
    • 154. Free and open source software.
    • 155. http://www.openoffice.org/product/impress.html