1
psyc3010 lecture 8
standard and hierarchical multiple regression
last week: correlation and regression
Next week: moderated regression
2
 last week we revised correlation & regression
and took a look at some of the underlying
principles of these methods [partitioning
variance into SS regression Σ(Ŷ − Ȳ)² and SS
residual Σ(Y − Ŷ)².]
 We extended these ideas to the multiple
predictor case (multiple regression) and
touched upon indices of predictor importance
 this week we go through two full examples of
multiple regression
– standard regression
– hierarchical regression
last week  this week
3
Indices of predictor importance:
 r [Pearson or zero-order correlation] – a scale free measure of
association – the standardised covariance between two factors
 r2 [the coefficient of determination] – the proportion of variability in one
factor (e.g., the DV) accounted for by another (e.g. an IV).
 b [unstandardised slope or unstandardised regression coefficient] – a
scale dependent measure of association, the slope of the regression line –
the change in units of Y expected with a 1 unit increase in X
 β [standardised slope or standardised regression coefficient] – a scale
free measure of association, the slope of the regression line if all variables
are standardised – the change in standard deviations in Y expected with a
1 standard deviation increase in X, controlling for all other predictors. β = r
in bivariate regression (when there is only one IV).
 pr2 [partial correlation squared] – a scale free measure of association
controlling for other IVs -- the proportion of residual variance in the DV
(after other IVs are controlled for) uniquely accounted for by the IV.
 sr2 [semi-partial correlation squared] – a scale free measure of
association controlling for other IVs -- the proportion of total variance in
the DV uniquely accounted for by the IV.
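The relationships among these indices are easy to see numerically. Below is a minimal numpy sketch (made-up data, not the lecture's dataset; the names x1, x2, y are illustrative) that computes r, r², b, β, sr², and pr² for one predictor in a two-predictor regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x2 = rng.normal(size=n)
x1 = 0.5 * x2 + rng.normal(size=n)            # predictors are correlated
y = 1.0 * x1 + 0.8 * x2 + rng.normal(size=n)

r = np.corrcoef(x1, y)[0, 1]                  # zero-order (Pearson) correlation

X = np.column_stack([np.ones(n), x1, x2])     # design matrix with constant
a, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
beta1 = b1 * x1.std(ddof=1) / y.std(ddof=1)   # standardised slope

def resid(v, w):
    # residual of v after regressing it on w (plus a constant)
    W = np.column_stack([np.ones(len(w)), w])
    return v - W @ np.linalg.lstsq(W, v, rcond=None)[0]

sr = np.corrcoef(resid(x1, x2), y)[0, 1]             # semi-partial correlation
pr = np.corrcoef(resid(x1, x2), resid(y, x2))[0, 1]  # partial correlation

print(f"r={r:.3f}  r2={r**2:.3f}  b1={b1:.3f}  beta1={beta1:.3f}  "
      f"sr2={sr**2:.3f}  pr2={pr**2:.3f}")
```

Running it illustrates the ordering noted on the next slide: |pr| is at least |sr|, and the zero-order r exceeds sr because x1 and x2 share variance.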
4
Comparing the different rs
 The zero-order (Pearson’s) correlation between IV
and DV ignores extent to which IV is correlated with
other IVs.
 The semi-partial correlation deals with unique effect of
IV on total variance in DV – usually what we are
interested in.
– Conceptually similar to eta squared
– Confusion alert: in SPSS the semi-partial r is called the part
correlation. No one else does this though.
 The partial correlation deals with unique effect of the IV
on residual variance in DV. More difficult to interpret –
most useful when other IVs = control variables.
– Conceptually similar to 'partial eta squared'
 Generally r > sr and pr > sr
5
bivariate vs multiple regression –
model coefficient of determination
[Venn diagrams: in bivariate regression, r² is the overlap between a single predictor and the criterion; in multiple regression, R² is the combined overlap of predictor1 and predictor2 with the criterion]
6
7
the linear model – one predictor
(2D space)
predictor (X)
criterion
(Y)
Ŷ = bX + a
8
the linear model – two predictors
(3D space)
criterion
(Y)
Ŷ = b1X1 + b2X2 + a
9
the linear model – 2 predictors
 criterion scores are predicted using the best
linear combination of the predictors
– similar to the line-of-best-fit idea, but it becomes the
plane-of-best-fit
– equation derived according to the least-squares
criterion – such that Σ(Y − Ŷ)² is minimized
• b1 is the slope of the plane relative to the X1 axis,
• b2 is the slope relative to the X2 axis,
• a is the point where the plane intersects the Y axis (when X1
and X2 are equal to zero)
 the idea extends to 3+ predictors but becomes
tricky to represent graphically (i.e., hyperspace)
10
example
 new study...examine the amount of variance in
academic achievement (GPA) accounted for
by…
– Minutes spent studying per week (questionnaire measure)
– motivation (questionnaire measure)
– anxiety (questionnaire measure)
 can use multiple regression to assess how much
variance the predictors explain as a set (R²)
 can also assess the relative importance of each
predictor (r, b, β, pr², sr²).
11
data table
subject study motivation anxiety GPA
(X1) (X2) (X3) (Y)
1 104 12 1 5.5
2 109 13 9 5.7
3 123 9 2 5.5
4 94 15 11 5.3
5 114 15 2 6.1
6 91 7 9 4.9
7 100 5 1 4.5
…
29 107 10 6 5.9
30 119 8 2 6.0
12
preliminary statistics
Mean SD N alpha
study time 97.967 8.915 30 .88
motivation 14.533 4.392 30 .75
anxiety 4.233 1.455 30 .85
GPA 5.551 2.163 30 .82
ST MOT ANX GPA
study time 1.00
motivation .313 1.00
anxiety .256 .536 1.00
GPA .637 .653 .505 1.00
Descriptive Statistics
Correlations
13
preliminary statistics
Mean SD N alpha
study time 97.967 8.915 30 .88
motivation 14.533 4.392 30 .75
anxiety 4.233 1.455 30 .85
GPA 5.551 2.163 30 .82
ST MOT ANX GPA
study time 1.00
motivation .313 1.00
anxiety .256 .536 1.00
GPA .637 .653 .505 1.00
Descriptive Statistics
Correlations
means and standard deviations are used to obtain
regression estimates, and are reported as
preliminary stats when one conducts MR. They
are needed to interpret coefficients, although
descriptively they are not as critical in MR as they
are for t-tests and anova
14
preliminary statistics
Mean SD N alpha
study time 97.967 8.915 30 .88
motivation 14.533 4.392 30 .75
anxiety 4.233 1.455 30 .85
GPA 5.551 2.163 30 .82
ST MOT ANX GPA
study time 1.00
motivation .313 1.00
anxiety .256 .536 1.00
GPA .637 .653 .505 1.00
Descriptive Statistics
Correlations
Cronbach’s α is an index of internal
consistency (reliability) for a continuous
scale
best to use scales with high reliability (α
> .70) if available – less error variance
15
preliminary statistics
Mean SD N alpha
study time 97.967 8.915 30 .88
motivation 14.533 4.392 30 .75
anxiety 4.233 1.455 30 .85
GPA 5.551 2.163 30 .82
ST MOT ANX GPA
study time 1.00
motivation .313 1.00
anxiety .256 .536 1.00
GPA .637 .653 .505 1.00
Descriptive Statistics
Correlations
the correlation matrix tells you the extent to
which each predictor is related to the criterion
(called validities), as well as intercorrelations
among predictors (collinearities).
to maximise R2 we want predictors that have high
validities and low collinearity
16
principle of parsimony:
[Venn diagram: predictor1, predictor2, and predictor3 each overlap the criterion but overlap little with one another, so R² is large]
predictors are:
- highly correlated with criterion
- have low(er) correlations with one another
- Good
17
principle of parsimony:
[Venn diagram: predictor1, predictor2, and predictor3 overlap the criterion but also overlap heavily with one another, so much of their contribution to R² is redundant]
predictors are:
- highly correlated with criterion
- highly correlated with one another
- Bad. Probably would delete some of the redundant IVs.
18
19
regression
solution
 calculation for multiple regression requires the solution of a set of
parameters (one slope for each predictor – b values)
 E.g. with 2 IVs, the 2 slopes define the plane-of-best-fit that goes
through the 3-dimensional space described by plotting the DV
against each IV
 Pick bs so that deviations of dots from the plane are minimized
– these weights are derived through matrix algebra - beyond the scope
of this course
 understand how with one variable, we model Y hat with a line
described by 2 parameters (bX + a);
 with two, model Y hat as a plane described by 3 parameters
(b1X1 + b2X2 + a)
 with p predictors, model Yhat as a p-dimensional hyperspace blob
with p + 1 parameters (constant, and a slope for each IV).
 So Ŷ, the predicted value of Y, is modeled with a linear composite
formed by multiplying each predictor by its regression weight /
slope / coefficient (just like a linear contrast) and adding the
constant:
Ŷ = .79ST + 1.45MOT + 1.68ANX – 95.02
 the criterion (GPA) is regressed on this linear composite
Ŷ = b1X1 + b2X2 + a
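As a concrete illustration of the least-squares solution described above (a sketch with made-up data, not the lecture's GPA dataset), numpy can solve for the bs and the constant directly:

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 50))
y = 2.0 * x1 - 1.0 * x2 + 5.0 + rng.normal(size=50)   # true plane plus noise

X = np.column_stack([x1, x2, np.ones(50)])        # columns: X1, X2, constant
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # minimises sum of (Y - Yhat)^2
b1, b2, a = coef
y_hat = X @ coef                                  # the linear composite
print(f"Y_hat = {b1:.2f}*X1 + {b2:.2f}*X2 + {a:.2f}")

# the same weights come from the normal equations (X'X)b = X'y,
# the matrix-algebra route mentioned on the slide
coef2 = np.linalg.solve(X.T @ X, X.T @ y)
```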
20
the linear model – two predictors
(3D space)
criterion
(Y)
Ŷ = b1X1 + b2X2 + a
21
the linear composite
[diagram: Study, Motivation, and Anxiety combine into the single composite score Ŷ that predicts the criterion]
Ŷ = .79ST + 1.45MOT + 1.68ANX – 95.02
22
the linear composite
[Venn diagram: the criterion and the linear composite Ŷ overlap]
…so we end up with two overlapping variables just like
in bivariate regression (only one is blue and weird and wibbly,
graphically symbolising that underlying the linear relationship
between the DV and Y hat, the linear composite, is a 4-dimensional
space defined by the 3 IVs and the DV)
23
the model: R and R2
 Despite the underlying complexity, the multiple
correlation coefficient (R) is just a bivariate correlation
between the criterion (GPA) and the best linear
combination of the predictors (Ŷ)
 i.e., R² = r²YŶ
where Ŷ = .79ST + 1.45MOT + 1.68ANX – 95.02
 accordingly, we can treat the model R exactly like r, i.e.:
i. calculate R adjusted:
R²adj = 1 − (1 − R²)(N − 1) / (N − p − 1)
ii. square R to obtain amount of variance accounted for in Y by
our linear composite (Ŷ)
iii. test for statistical significance
24
the model: R and R2
 1. In this example, R = .81
– so R²adj = 1 − (1 − .6518)(30 − 1) / (30 − 3 − 1) = .612
(Radj = .78)
 2. R² = .65 (.612 adjusted)
“…therefore, 65% of the variance in participants’
GPA was explained by the combination of their
study time, motivation, and anxiety.”
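The arithmetic is quick to verify (plain Python, using the slide's N = 30, p = 3, R² = .6518):

```python
N, p, R2 = 30, 3, .6518
R2_adj = 1 - (1 - R2) * (N - 1) / (N - p - 1)
print(round(R2_adj, 3), round(R2_adj ** 0.5, 3))   # 0.612 0.782
```

This matches the R²adj = .612 reported for the full model in the hierarchical model summary later in the lecture.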
25
 3. The overall model (R2) is tested for significance –
– H0 – the relationship between the predictors (as a group) and
the criterion is zero
– H1 – the relationship between the predictors (as a group) and
the criterion is different from zero
the model: R and R2
F = [R² (N − p − 1)] / [p (1 − R²)]
= [.6518 × (30 − 3 − 1)] / [3 × (1 − .6518)] = 16.23
df = p, N − p − 1
Reminder of t-test for r:
t = r √(N − 2) / √(1 − r²)
26
Test of R²
(analysis of regression)
F = [R² / p] / [(1 − R²) / (N − p − 1)], df = p, N − p − 1
= (variance accounted for / df) / (variance not accounted for (error) / df)
= MS REGRESSION / MS RESIDUAL
what we know (can account for): R²; what we don’t know (can’t account for): 1 − R²
27
 Or perform same test via analysis of regression:
SSY = SSRegression + SSResidual
SSY = Σ(Y − Ȳ)² = (5.5 − 5.551)² + (5.7 − 5.551)² …
= 6667.46
SSRegression = Σ(Ŷ − Ȳ)² = (6.22 − 5.551)² + …
= 4346.03
SSResidual = Σ(Y − Ŷ)²
= SSY − SSRegression = 6667.46 − 4346.03
= 2321.43
the model: R and R2
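Both routes to F give the same answer. A quick check in plain Python, using the slide's values (R² = .6518 and the SS from the analysis of regression):

```python
N, p, R2 = 30, 3, .6518
F_from_R2 = (R2 / p) / ((1 - R2) / (N - p - 1))

SS_reg, SS_res = 4346.03, 2321.43
MS_reg, MS_res = SS_reg / p, SS_res / (N - p - 1)
F_from_SS = MS_reg / MS_res

# both ~16.22, agreeing with the slides' 16.23 up to rounding of R2
print(round(F_from_R2, 2), round(F_from_SS, 2))
```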
28
Summary Table for Analysis of Regression:
the model: R and R2
df = p, N − p − 1
Model
Sums of
Squares df
Mean
Square F sig
Regression 4346.03 3 1448.68 16.23 .000
Residual 2321.43 26 89.29
Total 6667.46 29
The model including study time, motivation, and
anxiety accounted for significant variation in
participants’ GPA, F(3, 26) = 16.23, p < .001, R2 = .65.
29
30
individual predictors
 we already have our bivariate correlations (r)
between each predictor and the criterion. In
addition, SPSS gives us:
– b – (unstandardised) partial regression coefficient
– β – standardised partial regression coefficient
– pr – partial correlation coefficient
– sr – semi-partial correlation coefficient
(as the calculations for these are all matrix algebra we
will bypass that….)
31
pr – partial correlation coefficient
[Venn diagram: pr² is the overlap of predictor p with the part of the criterion not already explained by predictor2]
pr is the correlation between predictor p
and the criterion, with the variance shared
with the other predictors partialled out of
both predictor p and the criterion
Can write r01.2 [partial r between 0 and 1
excluding shared variance with 2]
pr² indicates the proportion of residual
variance in the criterion (DV variance left
unexplained by the other predictors) that is
explained by predictor p
prST = .581; prST² = 33.7%
prMOT = .562; prMOT² = 31.5%
prANX = .293; prANX² = 8.5%.
32
sr – semi-partial correlation coefficient
[Venn diagram: sr² is the overlap of predictor p with the total criterion variance, after the other predictors are removed from predictor p]
sr is the correlation between predictor p
and the criterion, with the variance shared
with the other predictors partialled out
of predictor p only
Can write r0(1.2) [partial r between 0 and (1
excluding 2)]
sr² indicates the unique contribution to the
total variance in the DV explained by
predictor p
srST = .469; srST² = 21.9%
srMOT = .411; srMOT² = 16.9%
srANX = .224; srANX² = 5%
shared variance ≈ 21% (R² − Σsr², 65% − 44%)
33
Ŷ = b1X1+ b2X2 + a
Ŷ = bY1.2X1 + bY2.1X2 + a
bY1.2 first-order coefficient
bY1.2 ≠ bY1 unless r12 = 0
Ŷ = b1X1 + b2X2 + b3X3 + a
Ŷ = bY1.23X1 + bY2.13X2 + bY3.12X3 + a
bY1.23 second-order coefficient
Zero-order coefficient – doesn't
take other IVs into account
First-order coefficient – takes
1 other IV into account
Takes 2 other IVs into account
All reported coefficients (e.g. in SPSS) are highest order coefficients
34
tests of bs:
test importance of the predictor in the context of all
the other predictors
divide b by its standard error. df = N – p – 1
tb1 = .789247 / .208497 = 3.785* ST
tb2 = 1.453540 / .484508 = 3.000* MOT
tb3 = 1.678871 / 1.437221 = 1.168 ANX
 ST contributes significantly to prediction of DV, after controlling for the other
predictors, and so does MOT
 though a valid zero-order predictor of DV, anx does not contribute to the
prediction, given ST and MOT
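These t values follow directly from dividing each b by its standard error (plain Python, using the coefficients from the slide):

```python
b  = [0.789247, 1.453540, 1.678871]   # ST, MOT, ANX
se = [0.208497, 0.484508, 1.437221]
t  = [bi / si for bi, si in zip(b, se)]
print([round(ti, 3) for ti in t])     # [3.785, 3.0, 1.168]
# df = N - p - 1 = 26; the two-tailed .05 critical value is about 2.056,
# so ST and MOT are significant and ANX is not
```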
35
Importance of predictors
can't rely on rs (zero-order), because the
predictors are interrelated
(predictor with a significant r may contribute nothing, once others are
included; e.g., ANX)
partial regression coefficient (bs):
adjusted for correlation of the predictor with the other
predictors
but
can't use relative magnitude of bs, because scale-
bound
(importance of a given b depends on unit and variability of measure)
36
Standardized regression
coefficients (βs):
rough estimate of relative contribution of
predictors, because they use the same metric
can compare βs within a regression
equation (but not necessarily across
groups & settings – because the standard
deviations of the variables change)
37
β1 = b1 (s1 / sY)
when IVs are not correlated:
β = r
when IVs are correlated:
βs (magnitudes, signs) are affected by pattern of
correlations among the predictors
ZY = β1Z1 + β2Z2 + β3Z3 + ... + βpZp
ZY = .46 ZST + .42 ZMOT + .16 ZANX
a one-SD increase in ST (with all other variables held
constant) is associated with an increase of .46 SDs in DV
Standardized regression
coefficients:
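The two definitions of β agree: rescaling b by the standard deviations gives exactly the slope you would get by regressing z-scores on z-scores. A minimal numpy sketch with made-up data (not the lecture's dataset):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(100, 2))
y = x @ np.array([2.0, -1.0]) + rng.normal(size=100)

X = np.column_stack([np.ones(100), x])
a, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
beta1 = b1 * x[:, 0].std(ddof=1) / y.std(ddof=1)   # beta = b * s_x / s_y

# same thing via standardised variables
zx = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
Zdesign = np.column_stack([np.ones(100), zx])
betas = np.linalg.lstsq(Zdesign, zy, rcond=None)[0][1:]
print(round(beta1, 3), np.round(betas, 3))          # beta1 matches betas[0]
```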
38
39
 standard
– all predictors are entered simultaneously
– each predictor is evaluated in terms of what it adds
to prediction beyond that afforded by all others
– most appropriate when IVs are not intercorrelated
 hierarchical
– predictors are entered sequentially in a pre-
specified order
– each predictor is evaluated in terms of what it adds
to prediction at its point of entry
– order of prediction based upon logic/theory
standard vs hierarchical regression
40
standard vs hierarchical
multiple regression
[diagram: predictor1 (IV1) and predictor2 (IV2) enter together and jointly overlap the criterion]
standard multiple regression:
• Model R² assessed in 1 step
• b for each IV based on unique contribution only
41
standard vs hierarchical
multiple regression
[diagram: IV1 enters in block 1 (step 1), IV2 enters at step 2; the criterion's variance is claimed step by step]
hierarchical multiple regression:
• Model R² assessed in > 1 step
• Each step (“block”) adds more IVs
• b for first IV based on total contribution; later IV on unique contribution
42
some rationales for order of entry:
1. to partial out the effect of a control variable not of interest to
the study
– exactly the same idea as ancova – your 'covariate' in this
case is the predictor entered at step 1
2. to build a sequential model according to some theory
– e.g., broad measure of personality entered at step 1, more
specific/narrow attitudinal measure entered at step 2
 order of entry is crucial to outcome and interpretation
 predictors can be entered singly or in blocks of >1
 now we will have an R, R², b, β, pr², sr² for EACH step
to report
 also test increment in prediction at each block:
– R2 change
– F change
hierarchical regression
43
hierarchical multiple regression
[diagram, model 1: predictor1 enters at step 1 and yields R, R², F; predictor2 enters at step 2 and yields Rch, R²ch, Fch]
44
hierarchical multiple regression
[diagram, model 2: the step-1 statistics (R, R², F) and the step-2 change statistics (Rch, R²ch, Fch) are joined by the full model 2's own R, R², F once both predictors are in]
45
testing hierarchical models
f = full(er) model [with more variables added]
r = reduced model
Fchange = [(R²f − R²r) / (pf − pr)] / [(1 − R²f) / (N − pf − 1)]
df = pf − pr, N − pf − 1
R²change = R²f − R²r
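Plugging in the numbers from the upcoming GPA example (R²f = .652 with 3 predictors, R²r = .255 with anxiety alone, N = 30) reproduces the F change reported in the model summary:

```python
N = 30
R2_f, p_f = .652, 3    # full model: ANX + ST + MOT
R2_r, p_r = .255, 1    # reduced model: ANX only

F_change = ((R2_f - R2_r) / (p_f - p_r)) / ((1 - R2_f) / (N - p_f - 1))
df1, df2 = p_f - p_r, N - p_f - 1
print(round(F_change, 2), df1, df2)   # ~14.83 on (2, 26) df; slides report 14.836
```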
46
47
suppose we wanted to repeat our GPA study using
hierarchical regression..
 further suppose our real interest was motivation and
study time, we just wanted to control for anxiety:
– enter anxiety at step 1
– enter motivation and study time at step 2
 preliminary statistics would be same as before
 model would be assessed sequentially
– step 1 – prediction by anxiety
– step 2 – prediction by motivation and study time
above and beyond that explained by anxiety
an example:
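For readers who want to try this themselves, here is one way to run the two steps in Python with statsmodels (a sketch with simulated stand-in variables anx, st, mot, gpa, not the lecture's data; the change statistics are computed from the two fits):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 30
anx = rng.normal(size=n)
st, mot = rng.normal(size=(2, n))
gpa = 0.3 * anx + 0.5 * st + 0.5 * mot + rng.normal(size=n)

# step 1: the control variable only
m1 = sm.OLS(gpa, sm.add_constant(anx)).fit()
# step 2: add the predictors of interest
m2 = sm.OLS(gpa, sm.add_constant(np.column_stack([anx, st, mot]))).fit()

r2_change = m2.rsquared - m1.rsquared
df1 = m2.df_model - m1.df_model
F_change = (r2_change / df1) / ((1 - m2.rsquared) / m2.df_resid)
print(f"step 1 R2={m1.rsquared:.3f}  step 2 R2={m2.rsquared:.3f}  "
      f"R2 change={r2_change:.3f}  F change={F_change:.2f}")
```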
48
model summary
Model R R² R²adj R²ch Fch df1 df2 sig Fch
1 .505 .255 .228 .255 9.584 1 28 .004
2 .813 .652 .612 .397 14.836 2 26 .000
(the last five columns are the change statistics)
for model 1 – R and R2 are the same as bivariate
r between GPA and Anxiety (as anxiety is the only
variable in the model).
49
model summary
Model R R² R²adj R²ch Fch df1 df2 sig Fch
1 .505 .255 .228 .255 9.584 1 28 .004
2 .813 .652 .612 .397 14.836 2 26 .000
(the last five columns are the change statistics)
here R2 ch is just the same as R2 because it
simply reflects the change from zero.
50
model summary
Model R R² R²adj R²ch Fch df1 df2 sig Fch
1 .505 .255 .228 .255 9.584 1 28 .004
2 .813 .652 .612 .397 14.836 2 26 .000
(the last five columns are the change statistics)
for model 2 – R and R2 are the same as our full
standard multiple regression conducted earlier.
51
model summary
Model R R² R²adj R²ch Fch df1 df2 sig Fch
1 .505 .255 .228 .255 9.584 1 28 .004
2 .813 .652 .612 .397 14.836 2 26 .000
(the last five columns are the change statistics)
R2 ch tells us that by including study time and
motivation we increase the amount of variance
accounted for in GPA by 40%
(this is the critical bit!)
52
model summary
Model R R² R²adj R²ch Fch df1 df2 sig Fch
1 .505 .255 .228 .255 9.584 1 28 .004
2 .813 .652 .612 .397 14.836 2 26 .000
(the last five columns are the change statistics)
alternatively, R2 ch tells us that after controlling
for anxiety, study time and motivation explain
40% of the variance in GPA
53
model summary
Model R R² R²adj R²ch Fch df1 df2 sig Fch
1 .505 .255 .228 .255 9.584 1 28 .004
2 .813 .652 .612 .397 14.836 2 26 .000
(the last five columns are the change statistics)
… and Fch tells us that this increment in the
variance accounted for is significant
(null hyp: R²ch = 0)
54
Summary Table for Analysis of Regression:
anova
df = p, N − p − 1
Model Sums of Squares df Mean Square F sig
1 Regression 1702.901 1 1702.901 9.584 .004
Residual 4964.567 28 177.306
Total 6667.46 29
2 Regression 4346.03 3 1448.68 16.23 .000
Residual 2321.43 26 89.29
Total 6667.46 29
details for model 1 are just the same as those
reported in the change statistics section on the
previous page (as the change was relative to zero)
55
Summary Table for Analysis of Regression:
anova
df = p, N − p − 1
Model Sums of Squares df Mean Square F sig
1 Regression 1702.901 1 1702.901 9.584 .004
Residual 4964.567 28 177.306
Total 6667.46 29
2 Regression 4346.03 3 1448.68 16.23 .000
Residual 2321.43 26 89.29
Total 6667.46 29
details for model 2 test the overall significance of the
model (and are therefore exactly the same as we
would get if we had done a standard regression)
56
coefficients
df = N − p − 1
Model B SE β t sig
1 constant -80.233 7.595 7.009 .000
ANX 5.268 1.700 .505 3.009 .004
2 constant -95.02
ANX 1.678 1.437 .16 1.168 .253
ST .789 .208 .46 3.785 .000
MOT 1.453 .484 .42 3.000 .005
model 1 shows the coefficients for anxiety as the
predictor of GPA (i.e., the variables included at step 1)
57
coefficients
Model B SE β t sig
1 constant -80.233 7.595 7.009 .000
ANX 5.268 1.700 .505 3.009 .004
2 constant -95.02
ANX 1.678 1.437 .16 1.168 .253
ST .789 .208 .46 3.785 .000
MOT 1.453 .484 .42 3.000 .005
model 2 is identical to the coefficients table we
would get in standard multiple regression if all
predictors were entered simultaneously
58
summary of results
step R² F R²ch Fch
1 ANX .255 9.584* .255 9.584*
2 ST .652 16.23* .397 14.836*
MOT
59
some uses for hierarchical multiple
regression (HMR)
 to control for nuisance variables
– as we have done now
– logic is same as for ancova
 to test mediation (briefly covered next week)
 to test moderated relationships (interactions)
Ŷ = b1X1 + b2X2 + b3X1X2 + a
60
Difference between structure of
Standard and Hierarchical MR tests
Hierarchical Multiple Regression:
1. Tests overall model automatically
2. Tests each Block (subgrouping of
variables) separately (2 sets of Fs)
3. Tests unique effect of each IV for
variables in this block and earlier – but
βs don't exclude overlapping variance
with variables in later blocks
4. Does not test for interactions
automatically – but use HMR to test
manually (moderated MR next week)
5. Report each block R2 change with F
test, plus IVs' βs with t-tests from each
block as entered, plus final model R2
with F test, plus relevant follow-ups.
6. Depending on theory may or may not
report betas for IVs from earlier blocks
again if they change in later blocks
- Usually not if early block = control
- Definitely yes if mediation test
Standard Multiple Regression:
1. Tests overall model R2
automatically
2. Does not test
subgroupings of variables
(Blocks)
3. Tests unique effect of
each IV (i.e., covariation
of residual DV scores with
IV once all other IVs‟
effects are controlled
(partialled out))
4. Does not test for
interactions automatically
5. Report Model R2 with F
test, plus each IV's βs
with t-tests, plus relevant
follow-ups
61
 multicollinearity and singularity
– this condition occurs when predictors are highly correlated
(> .80–.90)
– diagnosed with high intercorrelations of IVs (collinearities) and a
statistic called tolerance (see the sketch after this slide)
– tolerance = 1 − R²x
– R²x is the overlap between a particular predictor and all the
other predictors
• low tolerance = multicollinearity → singularity
• high tolerance = relatively independent predictors
– multicollinearity leads to unstable calculation of regression
coefficients (b), even though R² may be significant
 Some additional info about suppressor variables,
handling missing data, and cross-validation is provided
in the “Practice Materials” section of the web site
some issues in SMR & HMR
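As flagged in the multicollinearity bullet points above, tolerance is just 1 − R²x from regressing each predictor on the others. A minimal numpy sketch (made-up data; x2 is built to be nearly collinear with x1):

```python
import numpy as np

def tolerances(X):
    # tolerance_j = 1 - R^2 of predictor j regressed on the other predictors
    n, k = X.shape
    tol = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coef
        r2 = 1 - resid.var() / X[:, j].var()
        tol[j] = 1 - r2
    return tol   # VIF = 1 / tolerance

rng = np.random.default_rng(4)
x1 = rng.normal(size=100)
x2 = x1 + 0.1 * rng.normal(size=100)   # nearly collinear with x1 -> low tolerance
x3 = rng.normal(size=100)
print(np.round(tolerances(np.column_stack([x1, x2, x3])), 3))
```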
62
assumptions of multiple regression
 distribution of residuals
– normality: the conditional distribution of Y values is normal
around Ŷ (assumption of normality in arrays)
– homoscedasticity: the variance of Y values is constant across
different values of Ŷ (assumption of “homogeneity of variance in
arrays”)
– linearity: no systematic relationship between Ŷ and the errors of
prediction
– independence of errors
 scales (predictor and criterion scores)
– normality (variables are normally distributed), linearity (there is
a straight line relationship between predictors and criterion),
predictors are not singular (extremely highly correlated)
– measured using a continuous scale (interval or ratio)
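A common way to eyeball the residual assumptions (normality in arrays, homoscedasticity, linearity) is a residuals-versus-fitted plot. A sketch with simulated data (matplotlib assumed available):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=100)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef
resid = y - y_hat

# look for curvature (non-linearity) or funnel shapes (heteroscedasticity);
# a well-behaved model gives a flat, even band of points around zero
plt.scatter(y_hat, resid)
plt.axhline(0, color="grey")
plt.xlabel("fitted values (Y hat)")
plt.ylabel("residuals")
plt.show()
```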
63
In class next week:
 Moderated multiple regression
 Assignment 2
In the tutes:
 This week: Multiple regression, SPSS
 In 2 weeks: Moderated regression, SPSS
readings:
 Howell Ch 15
 Field Ch 5

More Related Content

Similar to Standard and Hierarchical Multiple Regression Analysis

Regression and corelation (Biostatistics)
Regression and corelation (Biostatistics)Regression and corelation (Biostatistics)
Regression and corelation (Biostatistics)Muhammadasif909
 
Module 2_ Regression Models..pptx
Module 2_ Regression Models..pptxModule 2_ Regression Models..pptx
Module 2_ Regression Models..pptxnikshaikh786
 
Logistic regression vs. logistic classifier. History of the confusion and the...
Logistic regression vs. logistic classifier. History of the confusion and the...Logistic regression vs. logistic classifier. History of the confusion and the...
Logistic regression vs. logistic classifier. History of the confusion and the...Adrian Olszewski
 
Pearson product moment correlation
Pearson product moment correlationPearson product moment correlation
Pearson product moment correlationSharlaine Ruth
 
SimpleLinearRegressionAnalysisWithExamples.ppt
SimpleLinearRegressionAnalysisWithExamples.pptSimpleLinearRegressionAnalysisWithExamples.ppt
SimpleLinearRegressionAnalysisWithExamples.pptAdnanAli861711
 
Linear regression.ppt
Linear regression.pptLinear regression.ppt
Linear regression.pptbranlymbunga1
 
Slideset Simple Linear Regression models.ppt
Slideset Simple Linear Regression models.pptSlideset Simple Linear Regression models.ppt
Slideset Simple Linear Regression models.pptrahulrkmgb09
 
Lesson 27 using statistical techniques in analyzing data
Lesson 27 using statistical techniques in analyzing dataLesson 27 using statistical techniques in analyzing data
Lesson 27 using statistical techniques in analyzing datamjlobetos
 
lecture13.ppt
lecture13.pptlecture13.ppt
lecture13.pptarkian3
 

Similar to Standard and Hierarchical Multiple Regression Analysis (20)

Regression and corelation (Biostatistics)
Regression and corelation (Biostatistics)Regression and corelation (Biostatistics)
Regression and corelation (Biostatistics)
 
Module 2_ Regression Models..pptx
Module 2_ Regression Models..pptxModule 2_ Regression Models..pptx
Module 2_ Regression Models..pptx
 
Medical statistics2
Medical statistics2Medical statistics2
Medical statistics2
 
Logistic regression vs. logistic classifier. History of the confusion and the...
Logistic regression vs. logistic classifier. History of the confusion and the...Logistic regression vs. logistic classifier. History of the confusion and the...
Logistic regression vs. logistic classifier. History of the confusion and the...
 
Regression analysis
Regression analysisRegression analysis
Regression analysis
 
Stat2013
Stat2013Stat2013
Stat2013
 
Simple Regression.pptx
Simple Regression.pptxSimple Regression.pptx
Simple Regression.pptx
 
Quantitative Methods - Level II - CFA Program
Quantitative Methods - Level II - CFA ProgramQuantitative Methods - Level II - CFA Program
Quantitative Methods - Level II - CFA Program
 
Pearson product moment correlation
Pearson product moment correlationPearson product moment correlation
Pearson product moment correlation
 
Regression analysis
Regression analysisRegression analysis
Regression analysis
 
Regression for class teaching
Regression for class teachingRegression for class teaching
Regression for class teaching
 
33851.ppt
33851.ppt33851.ppt
33851.ppt
 
SimpleLinearRegressionAnalysisWithExamples.ppt
SimpleLinearRegressionAnalysisWithExamples.pptSimpleLinearRegressionAnalysisWithExamples.ppt
SimpleLinearRegressionAnalysisWithExamples.ppt
 
Linear regression.ppt
Linear regression.pptLinear regression.ppt
Linear regression.ppt
 
lecture13.ppt
lecture13.pptlecture13.ppt
lecture13.ppt
 
lecture13.ppt
lecture13.pptlecture13.ppt
lecture13.ppt
 
Slideset Simple Linear Regression models.ppt
Slideset Simple Linear Regression models.pptSlideset Simple Linear Regression models.ppt
Slideset Simple Linear Regression models.ppt
 
lecture13.ppt
lecture13.pptlecture13.ppt
lecture13.ppt
 
Lesson 27 using statistical techniques in analyzing data
Lesson 27 using statistical techniques in analyzing dataLesson 27 using statistical techniques in analyzing data
Lesson 27 using statistical techniques in analyzing data
 
lecture13.ppt
lecture13.pptlecture13.ppt
lecture13.ppt
 

More from dawitg2

bacterial plant pathogen jeyarajesh-190413122916.pptx
bacterial plant pathogen jeyarajesh-190413122916.pptxbacterial plant pathogen jeyarajesh-190413122916.pptx
bacterial plant pathogen jeyarajesh-190413122916.pptxdawitg2
 
unit1_practical-applications-of-biotechnology.ppt
unit1_practical-applications-of-biotechnology.pptunit1_practical-applications-of-biotechnology.ppt
unit1_practical-applications-of-biotechnology.pptdawitg2
 
Introduction-to-Plant-Cell-Culture-lec1.ppt
Introduction-to-Plant-Cell-Culture-lec1.pptIntroduction-to-Plant-Cell-Culture-lec1.ppt
Introduction-to-Plant-Cell-Culture-lec1.pptdawitg2
 
fertilizers-150111082613-conversion-gate01.pdf
fertilizers-150111082613-conversion-gate01.pdffertilizers-150111082613-conversion-gate01.pdf
fertilizers-150111082613-conversion-gate01.pdfdawitg2
 
sac-301-manuresfertilizersandsoilfertilitymanagement-210105122952.pdf
sac-301-manuresfertilizersandsoilfertilitymanagement-210105122952.pdfsac-301-manuresfertilizersandsoilfertilitymanagement-210105122952.pdf
sac-301-manuresfertilizersandsoilfertilitymanagement-210105122952.pdfdawitg2
 
davindergill 135021014 -170426133338.pdf
davindergill 135021014 -170426133338.pdfdavindergill 135021014 -170426133338.pdf
davindergill 135021014 -170426133338.pdfdawitg2
 
preparationofdifferentagro-chemicaldosesforfield-140203005533-phpapp02.pdf
preparationofdifferentagro-chemicaldosesforfield-140203005533-phpapp02.pdfpreparationofdifferentagro-chemicaldosesforfield-140203005533-phpapp02.pdf
preparationofdifferentagro-chemicaldosesforfield-140203005533-phpapp02.pdfdawitg2
 
pesticide stoeage storage ppwscript.ppt
pesticide stoeage storage  ppwscript.pptpesticide stoeage storage  ppwscript.ppt
pesticide stoeage storage ppwscript.pptdawitg2
 
-pre-cautions--on--seed-storage--1-.pptx
-pre-cautions--on--seed-storage--1-.pptx-pre-cautions--on--seed-storage--1-.pptx
-pre-cautions--on--seed-storage--1-.pptxdawitg2
 
Intgrated pest management 02 _ 06_05.pdf
Intgrated pest management 02 _ 06_05.pdfIntgrated pest management 02 _ 06_05.pdf
Intgrated pest management 02 _ 06_05.pdfdawitg2
 
major potato and tomato disease in ethiopia .pptx
major potato and tomato disease in ethiopia .pptxmajor potato and tomato disease in ethiopia .pptx
major potato and tomato disease in ethiopia .pptxdawitg2
 
pesticide formulation pp602lec1to2-211207073233.pdf
pesticide formulation pp602lec1to2-211207073233.pdfpesticide formulation pp602lec1to2-211207073233.pdf
pesticide formulation pp602lec1to2-211207073233.pdfdawitg2
 
diseasespests-2013-130708184617-phpapp02.pptx
diseasespests-2013-130708184617-phpapp02.pptxdiseasespests-2013-130708184617-phpapp02.pptx
diseasespests-2013-130708184617-phpapp02.pptxdawitg2
 
pesticide pp602lec1to2-211207073233.pptx
pesticide pp602lec1to2-211207073233.pptxpesticide pp602lec1to2-211207073233.pptx
pesticide pp602lec1to2-211207073233.pptxdawitg2
 
agricultaraly important agrochemicals.ppt
agricultaraly important agrochemicals.pptagricultaraly important agrochemicals.ppt
agricultaraly important agrochemicals.pptdawitg2
 
agri pesticide chemistry-180722174627.pdf
agri pesticide chemistry-180722174627.pdfagri pesticide chemistry-180722174627.pdf
agri pesticide chemistry-180722174627.pdfdawitg2
 
372922285 -important Fungal-Nutrition.ppt
372922285 -important Fungal-Nutrition.ppt372922285 -important Fungal-Nutrition.ppt
372922285 -important Fungal-Nutrition.pptdawitg2
 
disease development and pathogenesis-201118142432.pptx
disease development and pathogenesis-201118142432.pptxdisease development and pathogenesis-201118142432.pptx
disease development and pathogenesis-201118142432.pptxdawitg2
 
ppp211lecture8-221211055228-824cf9da.pptx
ppp211lecture8-221211055228-824cf9da.pptxppp211lecture8-221211055228-824cf9da.pptx
ppp211lecture8-221211055228-824cf9da.pptxdawitg2
 
defensemechanismsinplants-180308104711.pptx
defensemechanismsinplants-180308104711.pptxdefensemechanismsinplants-180308104711.pptx
defensemechanismsinplants-180308104711.pptxdawitg2
 

More from dawitg2 (20)

bacterial plant pathogen jeyarajesh-190413122916.pptx
bacterial plant pathogen jeyarajesh-190413122916.pptxbacterial plant pathogen jeyarajesh-190413122916.pptx
bacterial plant pathogen jeyarajesh-190413122916.pptx
 
unit1_practical-applications-of-biotechnology.ppt
unit1_practical-applications-of-biotechnology.pptunit1_practical-applications-of-biotechnology.ppt
unit1_practical-applications-of-biotechnology.ppt
 
Introduction-to-Plant-Cell-Culture-lec1.ppt
Introduction-to-Plant-Cell-Culture-lec1.pptIntroduction-to-Plant-Cell-Culture-lec1.ppt
Introduction-to-Plant-Cell-Culture-lec1.ppt
 
fertilizers-150111082613-conversion-gate01.pdf
fertilizers-150111082613-conversion-gate01.pdffertilizers-150111082613-conversion-gate01.pdf
fertilizers-150111082613-conversion-gate01.pdf
 
sac-301-manuresfertilizersandsoilfertilitymanagement-210105122952.pdf
sac-301-manuresfertilizersandsoilfertilitymanagement-210105122952.pdfsac-301-manuresfertilizersandsoilfertilitymanagement-210105122952.pdf
sac-301-manuresfertilizersandsoilfertilitymanagement-210105122952.pdf
 
davindergill 135021014 -170426133338.pdf
davindergill 135021014 -170426133338.pdfdavindergill 135021014 -170426133338.pdf
davindergill 135021014 -170426133338.pdf
 
preparationofdifferentagro-chemicaldosesforfield-140203005533-phpapp02.pdf
preparationofdifferentagro-chemicaldosesforfield-140203005533-phpapp02.pdfpreparationofdifferentagro-chemicaldosesforfield-140203005533-phpapp02.pdf
preparationofdifferentagro-chemicaldosesforfield-140203005533-phpapp02.pdf
 
pesticide stoeage storage ppwscript.ppt
pesticide stoeage storage  ppwscript.pptpesticide stoeage storage  ppwscript.ppt
pesticide stoeage storage ppwscript.ppt
 
-pre-cautions--on--seed-storage--1-.pptx
-pre-cautions--on--seed-storage--1-.pptx-pre-cautions--on--seed-storage--1-.pptx
-pre-cautions--on--seed-storage--1-.pptx
 
Intgrated pest management 02 _ 06_05.pdf
Intgrated pest management 02 _ 06_05.pdfIntgrated pest management 02 _ 06_05.pdf
Intgrated pest management 02 _ 06_05.pdf
 
major potato and tomato disease in ethiopia .pptx
major potato and tomato disease in ethiopia .pptxmajor potato and tomato disease in ethiopia .pptx
major potato and tomato disease in ethiopia .pptx
 
pesticide formulation pp602lec1to2-211207073233.pdf
pesticide formulation pp602lec1to2-211207073233.pdfpesticide formulation pp602lec1to2-211207073233.pdf
pesticide formulation pp602lec1to2-211207073233.pdf
 
diseasespests-2013-130708184617-phpapp02.pptx
diseasespests-2013-130708184617-phpapp02.pptxdiseasespests-2013-130708184617-phpapp02.pptx
diseasespests-2013-130708184617-phpapp02.pptx
 
pesticide pp602lec1to2-211207073233.pptx
pesticide pp602lec1to2-211207073233.pptxpesticide pp602lec1to2-211207073233.pptx
pesticide pp602lec1to2-211207073233.pptx
 
agricultaraly important agrochemicals.ppt
agricultaraly important agrochemicals.pptagricultaraly important agrochemicals.ppt
agricultaraly important agrochemicals.ppt
 
agri pesticide chemistry-180722174627.pdf
agri pesticide chemistry-180722174627.pdfagri pesticide chemistry-180722174627.pdf
agri pesticide chemistry-180722174627.pdf
 
372922285 -important Fungal-Nutrition.ppt
372922285 -important Fungal-Nutrition.ppt372922285 -important Fungal-Nutrition.ppt
372922285 -important Fungal-Nutrition.ppt
 
disease development and pathogenesis-201118142432.pptx
disease development and pathogenesis-201118142432.pptxdisease development and pathogenesis-201118142432.pptx
disease development and pathogenesis-201118142432.pptx
 
ppp211lecture8-221211055228-824cf9da.pptx
ppp211lecture8-221211055228-824cf9da.pptxppp211lecture8-221211055228-824cf9da.pptx
ppp211lecture8-221211055228-824cf9da.pptx
 
defensemechanismsinplants-180308104711.pptx
defensemechanismsinplants-180308104711.pptxdefensemechanismsinplants-180308104711.pptx
defensemechanismsinplants-180308104711.pptx
 

Recently uploaded

Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application ) Sakshi Ghasle
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfMahmoud M. Sallam
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Celine George
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting DataJhengPantaleon
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaVirag Sontakke
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsKarinaGenton
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Science lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lessonScience lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lessonJericReyAuditor
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfadityarao40181
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 

Recently uploaded (20)

Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application )
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of India
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its Characteristics
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Science lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lessonScience lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lesson
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdf
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 

Standard and Hierarchical Multiple Regression Analysis

  • 1. 1 psyc3010 lecture 8 standard and hierarchical multiple regression last week: correlation and regression Next week: moderated regression
  • 2. 2  last week we revised correlation & regression and took a look at some of the underlying principles of these methods [partitioning variance into SS regression (Ŷ - Y) and SS residual (Y - Ŷ).]  We extended these ideas to the multiple predictor case (multiple regression) and touched upon indices of predictor importance  these week we go through two full examples of multiple regression – standard regression – heirarchical regression last week  this week
  • 3. 3 Indices of predictor importance:  r [Pearson or zero-order correlation] – a scale free measure of association – the standardised covariance between two factors  r2 [the coefficient of determination] – the proportion of variability in one factor (e.g., the DV) accounted for by another (e.g. an IV).  b [unstandardised slope or unstandardised regression coefficient] – a scale dependent measure of association, the slope of the regression line – the change in units of Y expected with a 1 unit increase in X   [standardised slope or standardised regression coefficient] – a scale free measure of association, the slope of the regression line if all variables are standardised – the change in standard deviations in Y expected with a 1 standard deviation increase in X, controlling for all other predictors.  = r in bivariate regression (when there is only one IV).  pr2 [partial correlation squared] – a scale free measure of association controlling for other IVs -- the proportion of residual variance in the DV (after other IVs are controlled for) uniquely accounted for by the IV.  sr2 [semi-partial correlation squared] – a scale free measure of association controlling for other IVs -- the proportion of total variance in the DV uniquely accounted for by the IV.
  • 4. 4 Comparing the different rs  The zero-order (Pearson’s) correlation between IV and DV ignores extent to which IV is correlated with other IVs.  The semi-partial correlation deals with unique effect of IV on total variance in DV – usually what we are interested in. – Conceptually similar to eta squared – Confusion alert: in SPSS the semi-partial r is called the part correlation. No one else does this though.  The partial correlation deals with unique effect of the IV on residual variance in DV. More difficult to interpret – most useful when other IVs = control variables. – Conceptually similar to „partial eta squared‟  Generally r > spr and pr > spr
  • 5. 5 criterion predictor1 bivariate vs multiple regression - model coefficient of determination predictor2 criterion predictor r2 R2
  • 6. 6
  • 7. 7 the linear model – one predictor (2D space) predictor (X) criterion (Y) Ŷ = bX + a
  • 8. 8 the linear model – two predictors (3D space) criterion (Y) Ŷ = b1X1 + b2X2 + a
  • 9. 9 the linear model – 2 predictors  criterion scores are predicted using the best linear combination of the predictors – similar to the line-of-best-fit idea, but it becomes the plane-of-best-fit – equation derived according to the least-squares criterion – such that (Y-Ŷ)2 is minimized • b1 is the slope of the plane relative to the X1 axis, • b2 is the slope relative to the X2 axis, • a is the point where the plane intersects the Y axis (when X1 and X2 are equal to zero)  the idea extends to 3+ predictors but becomes tricky to represent graphically (i.e., hyperspace)
  • 10. 10 example  new study...examine the amount of variance in academic achievement (GPA) accounted for by… – Minutes spent studying per week (questionnaire measure) – motivation (questionnaire measure) – anxiety (questionnaire measure)  can use multiple regression to asses how much variance the predictors explain as a set (R2 )  can also assess the relative importance of each predictor (r, b, , pr2, sr2).
  • 11. 11 data table subject study motivation anxiety GPA (X1) (X2) (X3) (Y) 1 104 12 1 5.5 2 109 13 9 5.7 3 123 9 2 5.5 4 94 15 11 5.3 5 114 15 2 6.1 6 91 7 9 4.9 7 100 5 1 4.5 … 29 107 10 6 5.9 30 119 8 2 6.0
  • 12. 12 preliminary statistics Mean SD N alpha study time 97.967 8.915 30 .88 motivation 14.533 4.392 30 .75 anxiety 4.233 1.455 30 .85 GPA 5.551 2.163 30 .82 ST MOT ANX GPA study time 1.00 motivation .313 1.00 anxiety .256 .536 1.00 GPA .637 .653 .505 1.00 Descriptive Statistics Correlations
  • 13. 13 preliminary statistics Mean SD N alpha study time 97.967 8.915 30 .88 motivation 14.533 4.392 30 .75 anxiety 4.233 1.455 30 .85 GPA 5.551 2.163 30 .82 IQ MOT ANX GPA IQ 1.00 motivation .313 1.00 anxiety .256 .536 1.00 GPA .637 .653 .505 1.00 Descriptive Statistics Correlations means and standard deviations are used to obtain regression estimates, and are reported as preliminary stats when one conducts MR. They are needed to interpret coefficients, although descriptively they are not as critical in MR as they are for t-tests and anova
  • 14. 14 preliminary statistics Mean SD N alpha IQ 97.967 8.915 30 .88 motivation 14.533 4.392 30 .75 anxiety 4.233 1.455 30 .85 GPA 5.551 2.163 30 .82 ST MOT ANX GPA study time 1.00 motivation .313 1.00 anxiety .256 .536 1.00 GPA .637 .653 .505 1.00 Descriptive Statistics Correlations Cronbach’s  is an index of internal consistency (reliability) for a continuous scale best to use scales with high reliability ( >.70) if available – less error variance
  • 15. 15 preliminary statistics Mean SD N alpha IQ 97.967 8.915 30 .88 motivation 14.533 4.392 30 .75 anxiety 4.233 1.455 30 .85 GPA 75.533 15.163 30 .82 IQ MOT ANX GPA IQ 1.00 motivation .313 1.00 anxiety .256 .536 1.00 GPA .637 .653 .505 1.00 Descriptive Statistics Correlations the correlation matrix, tells you the extent to which each predictor is related to the criterion (called validities), as well as intercorrelations among predictors (collinearities). to maximise R2 we want predictors that have high validities and low collinearity
  • 16. 16 principle of parsimony: predictor1 predictor3 predictor2 criterion R2 predictors are: -highly correlated with criterion -have low(er) correlations with one another -Good
  • 17. 17 principle of parsimony: predictor1 predictor3 predictor2 criterion R2 predictors are: -highly correlated with criterion -highly correlated with one another -Bad. Probably would delete some of the redundant IVs.
  • 18. 18
  • 19. 19 regression solution  calculation for multiple regression requires the solution of a set of parameters (one slope for each predictor – b values)  E.g. with 2 IVs, the 2 slopes define the plane-of-best-fit that goes through the 3-dimensional space described by plotting the DV against each IVs  Pick bs so that deviations of dots from the plane are minimized – these weights are derived through matrix algebra - beyond the scope of this course  understand how with one variable, we model Y hat with a line described by 2 parameters (bX + a);  with two, model Y hat as a plane described by 3 parameters (b1X1 + b2X2 + a)  with p predictors, model Yhat as a p-dimensional hyperspace blob with p + 1 parameters (constant, and a slope for each IV).  So Ŷ, the predicted value of Y, is modeled with a linear composite formed by multiplying each predictor by its regression weight / slope / coefficient (just like a linear contrast) and adding the constant: Ŷ = .79ST + 1.45MOT + 1.68ANX – 95.02  the criterion (GPA) is regressed on this linear composite Ŷ = b1X1 + b2X2 + a
  • 20. 20 the linear model – two predictors (3D space) criterion (Y) Ŷ = b1X1 + b2X2 + a
  • 22. 22 the linear composite criterion Ŷ …so we end up with two overlapping variables just like in bivariate regression (only one is blue and weird and wibbly, graphically symbolising that underlying the linear relationship between the DV and Y hat, the linear composite, is a 4-dimensional space defined by the 3 IVs and the DV)
  • 23. 23 the model: R and R2  Despite the underlying complexity, the multiple correlation coefficient (R) is just a bivariate correlation between the criterion (GPA) and the best linear combination of the predictors (Ŷ)  i.e., R2 = r2 YŶ where Ŷ = .79ST + 1.45MOT + 1.68ANX – 95.02  accordingly, we can treat the model R exactly like r, ie: i. calculate R adjusted: ii. square R to obtain amount of variance accounted for in Y by our linear composite (Ŷ) iii. test for statistical significance 2 N ) 1 N )( R 1 ( 1 2    
  • 24. 24 the model: R and R2  1. In this example, R = .81 – so R adj = = .798  2. R2 = .65 (.638 adjusted) “…therefore, 65% of the variance in participants’ GPA was explained by the combination of their study time, motivation, and anxiety.” 2 30 ) 1 30 )( 65 . 1 ( 1    
  • 25. 25  3. The overall model (R2) is tested for significance – – H0 – the relationship between the predictors (as a group) and the criterion is zero – H1 – the relationship between the predictors (as a group) and the criterion is different from zero the model: R and R2 23 . 16 ) 6518 . 1 ( 3 6518 . ) 1 3 30 ( ) R 1 ( p R ) 1 p N ( 2 2            F 1 p , p    N df r N  2 1 r2 t = Reminder of t-test for r
  • 26. 26 F = df = p, N – p – 1 = = variance accounted for / df variance not accounted for (error) / df = MS REGRESSION MS RESIDUAL Test of R2 (analysis of regression) ) 1 ( ) 1 ( 2 2 R p R p N    ) 1 /( ) 1 ( / 2 2    p N R p R What we know (can account for) What we don’t know (can’t account for)
  • 27. 27  Or perform same test via analysis of regression: SSY = SSRegression + SSResidual SSY =  (Y-Y)2 = (5.5 - 5.551)2 + (5.7 - 5.551)2 … = 6667.46 SSRegression =  (Ŷ - Y)2 = (6.22 – 5.551)2 + … = 4346.03 SSResidual =  (Y - Ŷ)2 = SSY - SSRegression = 6667.46 - 4346.03 = 2321.43 the model: R and R2
  • 28. 28 Summary Table for Analysis of Regression: the model: R and R2 1 p , p    N df Model Sums of Squares df Mean Square F sig Regression 4346.03 3 1448.68 16.23 .000 Residual 2321.43 26 89.29 Total 6667.46 29 The model including study time, motivation, and anxiety accounted for significant variation in participants’ GPA, F(3, 26) = 16.23, p < .001, R2 = .65.
  • 29. 29
  • 30. 30 individual predictors  we already have our bivariate correlations (r) between each predictor and the criterion. In addition, SPSS gives us: – b – (unstandardised) partial regression coefficient –  - standardised partial regression coefficient – pr – partial correlation coefficient – sr – semi-partial correlation coefficient (as the calculations for these are all matrix algebra we will bypass that….)
  • 31. 31 criterion Predictor p predictor2 pr2 pr – partial correlation coefficient pr is the correlation between predictor p and the criterion, with the variance shared with the other predictors partialled out Can write r01.2 [partial r between 0 and 1 excluding shared variance with 2] pr2 indicates the proportion of residual variance in the criterion (DV variance left unexplained by the other predictors) that is explained by predictor p prST = .581; prIQ 2 = 33.7% prMOT = .562 ; prMOT 2 = 31.5% prANX = .293; prANX 2 = 8.5%.
  • 32. 32 predictor2 sr – semi-partial correlation coefficient sr2 predictorp criterion sr is the correlation between predictor p and the criterion, with the variance shared with the other predictors partialled out of predictor p Can write r0(1.2) [partial r between 0 and (1 excluding 2)] sr2 indicates the unique contribution to the total variance in the DV explained by predictor p srST = .469; srIQ 2 = 21.9% srMOT = .411; srMOT 2 = 16.9% srANX = .224; srANX 2 = 5% shared variance ≈ 21% (R2 - ∑sr2, 65-44%)
  • 33. 33 Ŷ = b1X1+ b2X2 + a Ŷ = bY1.2X1 + bY2. 1X2 + a bY1.2 first-order coefficient bY1.2 ≠ bY1 unless r12 = 0 Ŷ = b1X1 + b2X2 + b3X3 + a Ŷ = bY1.23X1 + bY2.13X2 + bY3.12X3 + a bY1.23 second-order coefficient Zero-order coefficient – doesn‟t take other IVs into account First-order coefficient – takes 1 other IV into account Takes 2 other IVs into account All reported coefficients (e.g. in SPSS) are highest order coefficients
  • 34. 34 tests of bs: test importance of the predictor in the context of all the other predictors divide b by its standard error. df = N – p – 1 tb1 = .789247 = 3.785* ST .208497 tb2 = 1.453540 = 3.000* MOT .484508 tb3 = 1.678871 = 1.168 ANX 1.437221  ST contributes significantly to prediction of DV, after controlling for the other predictors, and so does MOT  though a valid zero-order predictor of DV, anx does not contribute to the prediction, given ST and MOT
  • 35. 35 Importance of predictors can't rely on rs (zero-order), because the predictors are interrelated (predictor with a significant r may contribute nothing, once others are included; e.g., ANX) partial regression coefficient (bs): adjusted for correlation of the predictor with the other predictors but can't use relative magnitude of bs, because scale- bound (importance of a given b depends on unit and variability of measure)
  • 36. 36 Standardized regression coefficients (β s): rough estimate of relative contribution of predictors, because use same metric can compare β s within a regression equation (but not necessarily across groups & settings – in that standard deviation of variables change)
  • 37. 37 β 1 = b1 . when IVs are not correlated: β = r when IVs are correlated: β s (magnitudes, signs) are affected by pattern of correlations among the predictors ZY = β1Z1 + β2Z2 + β3Z3 +... + βpZp ZY = .46 ZST + .42 ZMOT + .16 ZANX a one-SD increase in ST (with all other variables held constant) is associated with an increase of .46 SDs in DV Standardized regression coefficients: Y s s1
• 39. 39 standard vs hierarchical regression
 standard
– all predictors are entered simultaneously
– each predictor is evaluated in terms of what it adds to prediction beyond that afforded by all the others
– most appropriate when IVs are not intercorrelated
 hierarchical
– predictors are entered sequentially in a pre-specified order
– each predictor is evaluated in terms of what it adds to prediction at its point of entry
– order of entry is based upon logic/theory
• 40. 40 standard vs hierarchical multiple regression
[diagram: both IVs point at the criterion within a single model]
standard multiple regression:
• Model R² assessed in 1 step
• b for each IV based on its unique contribution only
• 41. 41 standard vs hierarchical multiple regression
[diagram: predictor1 entered in block 1 at step 1; predictor2 added at step 2]
hierarchical multiple regression:
• Model R² assessed in > 1 step
• Each step ("block") adds more IVs
• b for the first IV based on its total contribution; later IVs on their unique contribution
• 42. 42 hierarchical regression – some rationales for order of entry:
1. to partial out the effect of a control variable not of interest to the study – exactly the same idea as ANCOVA – your 'covariate' in this case is the predictor entered at step 1
2. to build a sequential model according to some theory – e.g., a broad measure of personality entered at step 1, a more specific/narrow attitudinal measure entered at step 2
 order of entry is crucial to outcome and interpretation
 predictors can be entered singly or in blocks of > 1
 we now have an R, R², b, β, pr², sr² for EACH step to report
 we also test the increment in prediction at each block:
– R² change
– F change
• 43. 43 hierarchical multiple regression
[diagram: step 1 enters predictor1; for model 1 we report R, R², F plus the change statistics R ch, R² ch, F ch]
• 44. 44 hierarchical multiple regression
[diagram: step 2 adds predictor2; for model 2 we report R, R², F for both models plus the change statistics R ch, R² ch, F ch for the step]
• 45. 45 testing hierarchical models
f = full(er) model [with more variables added]; r = reduced model
R²change = R²f − R²r
Fchange = [(R²f − R²r) / (pf − pr)] / [(1 − R²f) / (N − pf − 1)],  df = pf − pr, N − pf − 1
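Plugging the lecture's own numbers into this formula (reduced model: ANX only, R²r = .255, pr = 1; fuller model: R²f = .652, pf = 3; N = 30) recovers the F change reported on the following slides; a minimal sketch:

```python
def f_change(r2_f, r2_r, p_f, p_r, n):
    """F for the increment from a reduced model (p_r IVs) to a fuller model (p_f IVs)."""
    numerator = (r2_f - r2_r) / (p_f - p_r)
    denominator = (1 - r2_f) / (n - p_f - 1)
    return numerator / denominator

# df = (p_f - p_r, N - p_f - 1) = (2, 26)
print(f_change(0.652, 0.255, 3, 1, 30))   # ~14.8; matches F ch = 14.836 up to rounding
```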
• 47. 47 an example: suppose we wanted to repeat our GPA study using hierarchical regression…
 further suppose our real interest was motivation and study time, and we just wanted to control for anxiety:
– enter anxiety at step 1
– enter motivation and study time at step 2
 the preliminary statistics would be the same as before
 the model would be assessed sequentially (see the sketch below):
– step 1 – prediction by anxiety
– step 2 – prediction by motivation and study time above and beyond that explained by anxiety
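Here is a minimal sketch of the two-block analysis using statsmodels OLS (assumed available); the data are simulated stand-ins, so the statistics will not match the slides, but the structure of the two steps is the point:

```python
import numpy as np
import statsmodels.api as sm

# hypothetical stand-ins for the lecture's variables
rng = np.random.default_rng(1)
anx, st, mot = rng.normal(size=(3, 30))
gpa = 1.7 * anx + 0.8 * st + 1.5 * mot + rng.normal(size=30)

# step 1: control variable only; step 2: add the predictors of interest
step1 = sm.OLS(gpa, sm.add_constant(np.column_stack([anx]))).fit()
step2 = sm.OLS(gpa, sm.add_constant(np.column_stack([anx, st, mot]))).fit()

r2_ch = step2.rsquared - step1.rsquared                       # R-squared change
f_ch = (r2_ch / 2) / ((1 - step2.rsquared) / (30 - 3 - 1))    # F change, df = (2, 26)
print(f"R2: {step1.rsquared:.3f} -> {step2.rsquared:.3f}; "
      f"R2 ch = {r2_ch:.3f}, F ch = {f_ch:.2f}")
```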
• 48. 48 model summary

Model   R      R²     R² adj   R² ch   F ch     df1   df2   sig F ch
1       .505   .255   .228     .255    9.584    1     28    .004
2       .813   .652   .612     .397    14.836   2     26    .000

change statistics:
 for model 1, R and R² are the same as the bivariate r between GPA and anxiety (as anxiety is the only variable in the model)
 here R² ch is just the same as R², because it simply reflects the change from zero
 for model 2, R and R² are the same as in our full standard multiple regression conducted earlier
 R² ch tells us that by including study time and motivation we increase the amount of variance accounted for in GPA by 40% (this is the critical bit!)
 alternatively, R² ch tells us that after controlling for anxiety, study time and motivation explain 40% of the variance in GPA
 … and F ch tells us that this increment in the variance accounted for is significant (null hypothesis: R² ch = 0)
• 54. 54 Summary Table for Analysis of Regression: anova (df = p, N − p − 1)

Model          SS         df   MS         F       sig
1  Regression  1702.901    1   1702.901   9.584   .004
   Residual    4964.567   28   177.306
   Total       6667.46    29
2  Regression  4346.03     3   1448.68    16.23   .000
   Residual    2321.43    26   89.29
   Total       6667.46    29

details for model 1 are just the same as those reported in the change statistics section on the previous page (as the change was relative to zero); details for model 2 test the overall significance of the model (and are therefore exactly the same as we would get if we had done a standard regression)
• 56. 56 coefficients (df = N − p − 1)

Model        B         SE      β      t       sig
1  constant  −80.233   7.595          7.009   .000
   ANX       5.268     1.700   .505   3.009   .004
2  constant  −95.02
   ANX       1.678     1.437   .16    1.168   .253
   ST        .789      .208    .42    3.785   .000
   MOT       1.453     .484    .46    3.000   .005

model 1 shows the coefficients for anxiety as the sole predictor of GPA (i.e., the variable included at step 1); model 2 is identical to the coefficients table we would get in standard multiple regression with all predictors entered simultaneously
• 58. 58 summary of results

step          R²     F        R² ch   F ch
1  ANX        .255   9.584*   .255    9.584*
2  ST, MOT    .652   16.23*   .397    14.836*
• 59. 59 some uses for hierarchical multiple regression (HMR)
 to control for nuisance variables – as we have done now – the logic is the same as for ANCOVA
 to test mediation (briefly covered next week)
 to test moderated relationships (interactions):
Ŷ = b1X1 + b2X2 + b3X1X2 + a
• 60. 60 Difference between the structure of Standard and Hierarchical MR tests
Hierarchical Multiple Regression:
1. Tests the overall model automatically
2. Tests each block (subgrouping of variables) separately (2 sets of Fs)
3. Tests the unique effect of each IV relative to the variables in its own and earlier blocks – but the βs don't exclude overlapping variance with variables in later blocks
4. Does not test for interactions automatically – but HMR can be used to test them manually (moderated MR next week)
5. Report each block's R² change with its F test, plus the IVs' βs with t-tests from each block as entered, plus the final model R² with its F test, plus relevant follow-ups
6. Depending on theory, may or may not report βs for IVs from earlier blocks again if they change in later blocks
– usually not if the early block is a control variable
– definitely yes if testing mediation
Standard Multiple Regression:
1. Tests the overall model R² automatically
2. Does not test subgroupings of variables (blocks)
3. Tests the unique effect of each IV (i.e., the covariation of residual DV scores with the IV once all other IVs' effects are controlled / partialled out)
4. Does not test for interactions automatically
5. Report the model R² with its F test, plus each IV's β with its t-test, plus relevant follow-ups
• 61. 61 some issues in SMR & HMR
 multicollinearity and singularity
– this condition occurs when predictors are highly correlated (r > .80–.90)
– diagnosed via high intercorrelations of the IVs (collinearities) and a statistic called tolerance
– tolerance = 1 − R²x, where R²x is the overlap between a particular predictor and all the other predictors (see the sketch below)
• low tolerance = multicollinearity / singularity
• high tolerance = relatively independent predictors
– multicollinearity leads to unstable calculation of the regression coefficients (bs), even though R² may be significant
 some additional info about suppressor variables, handling missing data, and cross-validation is provided in the "Practice Materials" section of the web site
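A minimal numpy sketch of the tolerance calculation, for a hypothetical n × p predictor matrix X (not the lecture's data):

```python
import numpy as np

def tolerance(X, j):
    """Tolerance of predictor j: 1 - R^2 from regressing X[:, j] on the other columns."""
    y = X[:, j]
    others = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(others, y, rcond=None)
    resid = y - others @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 - r2

# hypothetical predictors; the third is nearly a copy of the first
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=30)

print([round(tolerance(X, j), 3) for j in range(3)])   # low values flag multicollinearity
```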
• 62. 62 assumptions of multiple regression
 distribution of residuals
– normality: the conditional arrays of Y values are normally distributed around Ŷ (assumption of normality in arrays)
– homoscedasticity: the variance of the Y values is constant across different values of Ŷ (assumption of "homogeneity of variance in arrays")
– linearity: the errors of prediction have a straight-line (not curved) relationship with Ŷ
– independence of errors
 scales (predictor and criterion scores)
– normality (variables are normally distributed) and linearity (there is a straight-line relationship between predictors and criterion)
– predictors are not singular (extremely highly correlated)
– measured using a continuous scale (interval or ratio)
• 63. 63 In class next week:
 Moderated multiple regression
 Assignment 2
In the tutes:
 This week: Multiple regression, SPSS
 In 2 weeks: Moderated regression, SPSS
readings:
 Howell Ch 15
 Field Ch 5