2. Review of Statistics
•Levels of Measurement:
• Nominal
•Edit “Values”
•Useful for labelling categories rather than typing labels by hand
• Ordinal
• Interval
• Ratio
•Both interval and ratio levels are considered “Scale” in SPSS
5. Editing Values (cont’d)
1. Type a number in the Value field (you may start with either 0 or 1)
2. Type the corresponding label
3. Do not forget to click “Add” before entering the next category and finalizing the list
7. Common Graphs Used in SPSS
• Scatterplot
• Correlation/regression
• Linearity Assumption
• Histogram
• Checking the normality assumption
• Continuous variable
• Bar Graph
• Categories/discrete
• Box plot
• Can be helpful in checking outliers
• Line Graph
• For factorial designs
8. Review of Statistical Methods

Conditions | Parametric | Non-parametric
One sample | z-test (known σ) | n/a
One sample | One-sample t-test (unknown σ) | One-way chi-square
Two independent samples | Independent t-test | Mann-Whitney U
Two related samples | Dependent t-test | Wilcoxon’s T
Three or more independent samples | Between-subjects ANOVA | Kruskal-Wallis H
Three or more related samples | Within-subjects ANOVA | Friedman’s chi-square
Two independent factors | Two-Way ANOVA | Two-way chi-square
9. Primary Assumptions
•Before conducting a parametric test, the following assumptions must be met:
• Level of Measurement
• Normality
• Independence
• Homogeneity of Variance
11. Normality Assumption
•Hypothesis testing relies on the sampling distribution being normal (most critical when n < 30)
•Other methods consider the error distribution instead – mostly in regression
•If the assumption is not met, inferences to the population may be inaccurate
14. Homogeneity of Variance Assumption
•When comparing groups on the same variable, the variance of the outcome variable should be the same for each group.
•When looking at the relationship between continuous variables it is a bit more complicated: the variance of the error terms should be stable across all values of the predictor.
15. NOTE:
•If you run a one-tailed test, divide the two-tailed p-value by 2
•For a one-tailed test, set the confidence level at 90% for an alpha of .05 and at 98% for an alpha of .01
18. z-scores
1. Click “Analyze”
2. Move cursor to “Descriptive Statistics”
3. Select “Descriptives”
4. Check the box “Save standardized values as variables”
5. SPSS will generate a new standardized variable for each selected variable
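What that checkbox computes is z = (X − M) / s for every case, using the sample standard deviation. A minimal by-hand sketch in Python (the data are hypothetical):

```python
import statistics

scores = [10, 12, 14, 16, 18]   # hypothetical raw scores
mean = statistics.mean(scores)  # 14
sd = statistics.stdev(scores)   # sample SD (n - 1 denominator), as SPSS uses

# z = (X - M) / s for every case -- this is what the new Z-variable contains
z_scores = [(x - mean) / sd for x in scores]
print([round(z, 3) for z in z_scores])
```

Whatever the raw data, the resulting variable always has mean 0 and standard deviation 1.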
19. One-Sample t-test
1. Click “Analyze”
2. Move cursor to “Compare Means”
3. Select “One-Sample T Test”
4. Move the variable to the “Test Variable(s)” box
5. The “Test Value” is the hypothesized population mean (usually given in problems)
H0: μ = test value
H1: μ ≠ test value
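The statistic SPSS reports here is t = (M − μ) / (s / √n) with df = n − 1. A by-hand sketch, using hypothetical ADL counts:

```python
import math
import statistics

scores = [15, 18, 17, 16, 19, 17, 18, 16, 17, 18, 17, 20]  # hypothetical data, n = 12
test_value = 17   # hypothesized population mean (the "Test Value" in SPSS)

n = len(scores)
mean = statistics.mean(scores)
sd = statistics.stdev(scores)

# t = (M - mu) / (s / sqrt(n)), df = n - 1
t = (mean - test_value) / (sd / math.sqrt(n))
print(f"t({n - 1}) = {t:.3f}")
```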
20. Output for a One-Sample t-test
SAMPLE INTERPRETATION:
The average number of activities of daily living performed by the participants after they received group therapy did not differ significantly from the test value, t(11) = 0.215, p = .834; the test value was M = 17 and the sample mean was M = 17.333.
21. Independent Samples t-test
1. The first variable holds the participants’ scores and the second holds the group/condition codes.
2. Click “Analyze”
3. Move cursor to “Compare Means”
4. Select “Independent Samples T-test”
5. Move the group variable to “Grouping Variable” and click “Define Groups”
H0: μ1 = μ2
H1: μ1 ≠ μ2
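The “equal variances assumed” row of the SPSS output uses the pooled-variance t statistic with df = n1 + n2 − 2. A by-hand sketch with hypothetical satisfaction scores:

```python
import math
import statistics

group1 = [8, 9, 7, 10, 9, 8]  # hypothetical: designed their workspace
group2 = [6, 7, 8, 6, 7, 6]   # hypothetical: did not design

n1, n2 = len(group1), len(group2)
m1, m2 = statistics.mean(group1), statistics.mean(group2)
v1, v2 = statistics.variance(group1), statistics.variance(group2)

# Pooled variance (equal variances assumed), then the t statistic
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
print(f"t({n1 + n2 - 2}) = {t:.3f}")
```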
22. Output for an Independent Samples t-test
SAMPLE INTERPRETATION:
With t(16) = 1.727, p = 0.103, we conclude that there is no significant difference in job satisfaction between the employees who designed their workspace and those who did not.
24. Dependent Samples t-test
1. “Pretest” must be in a separate column from “Posttest”
2. Click “Analyze”
3. Move cursor to “Compare Means”
4. Select “Paired Samples T-test”
5. Move pretest to “Variable 1” and posttest
to “Variable 2”
H0: μ1 = μ2
H1: μ1 ≠ μ2
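The paired test is just a one-sample t-test on the difference scores, with df = n − 1 pairs. A by-hand sketch with hypothetical pre/post scores:

```python
import math
import statistics

pretest  = [12, 14, 11, 13, 15, 12, 14, 13]  # hypothetical scores before therapy
posttest = [15, 15, 14, 16, 16, 15, 17, 14]  # hypothetical scores after, n = 8 pairs

# Work on the difference scores, exactly as the paired test does
diffs = [b - a for a, b in zip(pretest, posttest)]
n = len(diffs)
md = statistics.mean(diffs)
sd = statistics.stdev(diffs)

t = md / (sd / math.sqrt(n))  # df = n - 1
print(f"t({n - 1}) = {t:.3f}")
```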
25. Output for a Dependent Samples t-test
SAMPLE INTERPRETATION:
We therefore conclude that there was a significant difference in the participants’ performance of activities of daily living, t(7) = -3.161, p = .016.
26. Note on Effect Size:
Unfortunately, the different measures of effect size are not calculated by SPSS
28. One-Way ANOVA
1. Type the scores in the first variable and add a grouping variable in the second
2. Click “Analyze”
3. Move cursor to “Compare Means”
4. Select “One-Way ANOVA”
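The F ratio SPSS produces is MS_between / MS_within. A by-hand sketch with hypothetical scores for three independent groups:

```python
import statistics

# Hypothetical scores for k = 3 independent groups (N = 15)
groups = [
    [4, 5, 6, 5, 4],
    [6, 7, 8, 7, 6],
    [8, 9, 7, 9, 8],
]

all_scores = [x for g in groups for x in g]
grand_mean = statistics.mean(all_scores)
N, k = len(all_scores), len(groups)

# Partition the variability: between-groups and within-groups sums of squares
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

df_b, df_w = k - 1, N - k
F = (ss_between / df_b) / (ss_within / df_w)
print(f"F({df_b}, {df_w}) = {F:.3f}")
```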
29. Output for a One-Way ANOVA
SAMPLE INTERPRETATION:
With F(2, 12) = 5.293, p = .022, at least one group differed significantly in the number of activities of daily living performed.
30. Repeated Measures ANOVA
1. Enter scores per condition on separate columns
2. Click “Analyze”
3. Select “General Linear Model”
4. Click “Repeated Measures”
5. Optional: Type “Factor 1” onto the “Repeated Measures Define
Factors” box
6. Enter the number of levels/treatment conditions
7. Click “Add”
8. Click “Define”
9. Move all variables to “Within Subjects Variables”
H0: μ1 = μ2 = μ3 (there is no significant difference)
H1: at least one μ differs (there is a significant difference somewhere)
31. Independent Measures (Factorial) ANOVA
1. Enter all scores in a single column.
2. Factor A on second column
3. Factor B on third column
4. Click “Analyze”
5. Move cursor on “General Linear Model”
6. Click “Univariate”
7. Move the scores column to “Dependent Variable”
8. Move the factor columns to “Fixed Factors”
We are looking for main effects and interaction
32. Post-Hoc in ANOVA
•Tukey’s HSD
•Effect Size: eta-squared
η² = SS_B / SS_T
P. S.: Don’t forget to take the square root (to obtain η)
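Since SPSS does not print eta-squared here, it can be computed from the ANOVA table’s sums of squares. A sketch with hypothetical SS values:

```python
import math

# Hypothetical sums of squares read off an ANOVA table
ss_between = 29.2
ss_within = 8.4
ss_total = ss_between + ss_within

eta_sq = ss_between / ss_total  # proportion of variance explained
eta = math.sqrt(eta_sq)         # the square root gives eta, per the slide's P.S.
print(f"eta-squared = {eta_sq:.3f}, eta = {eta:.3f}")
```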
34. Mann-Whitney U
1. Click “Analyze”
2. Move cursor to “Nonparametric Tests”
3. Select “Independent Samples”
4. Go to “Fields” to see all variables in the data editor
1. Use predefined roles (if you have assigned them)
2. Use custom field assignments if not
5. Drag the DV on “Test Fields” and IV on “Groups”
6. Select “Settings”
7. Select “Customize Tests”
8. Select “Mann-Whitney U”
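Under the hood, U comes from ranking all scores together: U = n1·n2 + n1(n1 + 1)/2 − R1, where R1 is group 1’s rank sum, and SPSS reports the smaller of U1 and U2. A sketch with hypothetical, tie-free data:

```python
# Hypothetical scores for two independent groups (no ties, for simplicity)
group1 = [3, 5, 8, 10]
group2 = [1, 2, 4, 6, 7]

combined = sorted(group1 + group2)
rank = {v: i + 1 for i, v in enumerate(combined)}  # ranks 1..N

n1, n2 = len(group1), len(group2)
r1 = sum(rank[v] for v in group1)                  # rank sum of group 1

u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
u2 = n1 * n2 - u1
U = min(u1, u2)  # SPSS reports the smaller U
print(f"U = {U}")
```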
35. Wilcoxon’s T
1. Click “Analyze”
2. Move cursor to “Nonparametric Tests”
3. Select “Related Samples”
4. Go to “Fields” to see all variables in the data editor
1. Use predefined roles (if you have assigned them)
2. Use custom field assignments if not
5. Drag the paired variables to “Test Fields”
6. Select “Settings”
7. Select “Customize Tests”
8. Select “Wilcoxon Matched-Pair Signed-Rank”
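Wilcoxon’s T is the smaller of the positive and negative signed-rank sums: rank the nonzero differences by absolute size, then sum the ranks by sign. A sketch with hypothetical pairs chosen so no |difference| is tied:

```python
# Hypothetical paired scores (differences all nonzero, |d| all distinct)
pretest  = [10, 12, 9, 14, 11, 13]
posttest = [12, 11, 12, 19, 7, 19]

diffs = [b - a for a, b in zip(pretest, posttest) if b != a]  # drop zero diffs
ranked = sorted(diffs, key=abs)                   # rank by absolute difference
ranks = {d: i + 1 for i, d in enumerate(ranked)}  # works because |d| are distinct

t_plus = sum(ranks[d] for d in diffs if d > 0)    # sum of positive ranks
t_minus = sum(ranks[d] for d in diffs if d < 0)   # sum of negative ranks
T = min(t_plus, t_minus)
print(f"T = {T}")
```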
36. Kruskal-Wallis H
1. Click “Analyze”
2. Move cursor to “Nonparametric Tests”
3. Select “Independent Samples”
4. Go to “Fields” to see all variables in the data editor
1. Use predefined roles (if you have assigned them)
2. Use custom field assignments if not
5. Drag the DV on “Test Fields” and IV on “Groups”
6. Select “Settings”
7. Select “Customize Tests”
8. Select “Kruskal-Wallis 1 Way ANOVA”
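The test statistic generalizes Mann-Whitney ranking to k groups: H = 12 / (N(N + 1)) · Σ(Rᵢ² / nᵢ) − 3(N + 1), where Rᵢ is each group’s rank sum. A sketch with hypothetical, tie-free data:

```python
# Hypothetical scores for three independent groups (no ties)
groups = [
    [1, 4, 7],
    [2, 5, 8],
    [3, 6, 9, 10],
]

combined = sorted(x for g in groups for x in g)
rank = {v: i + 1 for i, v in enumerate(combined)}  # ranks 1..N
N = len(combined)

# H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
H = 12 / (N * (N + 1)) * sum(
    sum(rank[v] for v in g) ** 2 / len(g) for g in groups
) - 3 * (N + 1)
print(f"H = {H:.3f}")
```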
37. Friedman’s ANOVA
1. Click “Analyze”
2. Move cursor to “Nonparametric Tests”
3. Select “Related Samples”
4. Go to “Fields” to see all variables in the data editor
1. Use predefined roles (if you have assigned them)
2. Use custom field assignments if not
5. Drag the paired variables to “Test Fields”
6. Select “Settings”
7. Select “Customize Tests”
8. Select “Friedman’s 2-way ANOVA”
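Friedman’s statistic ranks each subject’s scores across the k conditions, sums the ranks per condition, and applies χ²_F = 12 / (nk(k + 1)) · ΣRⱼ² − 3n(k + 1). A sketch with hypothetical, tie-free data:

```python
# Hypothetical data: rows = subjects (n = 4), columns = conditions (k = 3)
data = [
    [1, 3, 2],
    [2, 3, 1],
    [1, 2, 3],
    [1, 3, 2],
]

n, k = len(data), len(data[0])
# Rank within each subject's row (no tied scores in this example)
rank_rows = [[sorted(row).index(v) + 1 for v in row] for row in data]
col_sums = [sum(r[j] for r in rank_rows) for j in range(k)]  # R_j per condition

chi2 = 12 / (n * k * (k + 1)) * sum(R ** 2 for R in col_sums) - 3 * n * (k + 1)
print(f"chi-square = {chi2:.3f}")
```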
39. 1. Enter data in two columns
- For the point-biserial: enter the scores in the first column and the code for the dichotomous variable in the second
- For the phi coefficient: enter the data any way you prefer as long as both variables are dichotomous (limited to 2 x 2)
2. Click “Analyze”
3. Click “Correlate”
4. Click “Bivariate”
5. Move the columns to the “Variables” box
6. “Pearson” must be checked. For Spearman’s rho, check “Spearman”
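The Pearson r that this dialog produces is SP / √(SSₓ · SS_y); the point-biserial is the same formula with the dichotomous variable coded 0/1. A by-hand sketch with hypothetical paired scores:

```python
import math
import statistics

x = [1, 2, 3, 4, 5]  # hypothetical paired scores
y = [2, 1, 4, 3, 5]

mx, my = statistics.mean(x), statistics.mean(y)
sp = sum((a - mx) * (b - my) for a, b in zip(x, y))  # sum of cross-products
ssx = sum((a - mx) ** 2 for a in x)
ssy = sum((b - my) ** 2 for b in y)

r = sp / math.sqrt(ssx * ssy)  # Pearson correlation coefficient
print(f"r = {r:.3f}")
```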
41. Measures of Association
•Lambda
• IV & DV are nominal
• Polytomous
•Gamma
• IV & DV are ordinal or one variable is
dichotomous nominal
•Cramer’s V
• One variable is ordinal and the other one is nominal
• Polytomous
45. Forms of Regression
•Forced Entry
• Enters all predictors at the same time
• Used in testing a theory
•Hierarchical
• Determines if adding a new variable improves the
predictive power of the model
•Stepwise
• What variables are included/excluded
• Not useful in hypothesis testing
46. 1. Enter data in two columns
2. Click “Analyze”, move the cursor to “Regression”, and select “Linear”
3. Put your “Y” variable in the “Dependent” box
4. Put your X variable(s) in the “Independent(s)” box
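For one predictor, the coefficients SPSS estimates are the least-squares slope b = SP / SSₓ and intercept a = M_y − b·Mₓ. A by-hand sketch with hypothetical data:

```python
import statistics

x = [1, 2, 3, 4, 5]               # hypothetical predictor ("Independent" box)
y = [2.1, 3.9, 6.2, 7.8, 10.1]    # hypothetical outcome ("Dependent" box)

mx, my = statistics.mean(x), statistics.mean(y)

# Least-squares slope b = SP / SS_x, intercept a = M_y - b * M_x
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
print(f"Y' = {a:.3f} + {b:.3f}X")
```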
47. Assumptions
•Level of Measurement
• Must be interval/ratio level
• If this is unmet, alternatives exist:
• Logistic Regression
• DV is dichotomous
• IV is any level of measurement
• Dummy Variable Regression
• IV is nominal level
57. Factor Analysis
• One latent variable is theorized to influence multiple observed outcome variables
(Diagram: the latent variable Abnormality influencing the observed indicators Deviance, Distress, and Dysfunction)
58. Types of Factor Analysis
•Exploratory
• Factor structure is not yet known
• Applied when creating surveys
• Which items go together in a single factor?
•Confirmatory
• Factor structure is known/hypothesized
• Tests how well the model fits the data
59. Factor Analysis
1. Click “Analyze”
2. Move cursor to “Dimension Reduction”
3. Select “Factor”
4. Select the variables of interest that would
indicate a latent construct.
60. Factor Analysis
1. Select “Extraction”
2. Check “Scree Plot”
3. Select “Scores”
4. Click on “Save as Variables”
5. Select “Regression”
- SPSS will generate a new variable so we can build a model out of it
61. Retaining Factors
• Eigenvalues
• Kaiser’s K1 Rule
• Scree Plot
• Retain factors before the curve flattens (the “elbow”)
• Theory
• Retain factors that make sense
64. Factor Analysis for Multiple Factors
1. Repeat the process indicated earlier
2. To investigate loadings, select “Rotation”
3. Select “Varimax”
4. If a loading is > 0.40, the item belongs to that factor.
65. Reliability Analysis
1. Select “Analyze”
2. Move cursor to “Scale”
3. Select “Reliability Analysis”
4. 0.70 to 0.80 is considered acceptable
- 0.80 or higher is preferred for clinical fields
- Cronbach’s Alpha = inter-item consistency
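Cronbach’s alpha can be computed directly from the item and total-score variances: α = k/(k − 1) · (1 − Σs²ᵢ / s²_T). A by-hand sketch with hypothetical item scores:

```python
import statistics

# Hypothetical item scores: rows = respondents, columns = k = 3 items
items = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]

k = len(items[0])
# Sample variance of each item, and of the total score per respondent
item_vars = [statistics.variance([row[j] for row in items]) for j in range(k)]
total_var = statistics.variance([sum(row) for row in items])

# alpha = k/(k-1) * (1 - sum(item variances) / total variance)
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```

By the slide’s rule of thumb, an alpha this high would be acceptable even for clinical use.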