ORDINAL LOGISTIC REGRESSION
Dr. Athar Khan
matharm@yahoo.com
3/29/2020 DR ATHAR KHAN 1
Ordinal logistic regression (OLR) is generally used when you
have categories for the dependent variable that are ordered
(i.e., are ranked).
When the proportional odds assumption is violated,
multinomial logistic regression (MLR) provides a viable
alternative to OLR. The proportional
odds assumption essentially states that the relationship
between the independent variable and dependent variable
is constant, irrespective of which groups are being
compared on the dependent variable (see Osborne, 2015,
2017).
Overview
▪ Logistic Regression is a version of multiple regression
where the outcome variable is binary (dichotomous),
meaning there are only two possible outcomes. The
model can be used to calculate the probability of one of
the two outcomes occurring over the other for a given
case/observation by using the values of a set of known
explanatory variables.
▪ The logit is a transformation of a probability P (ranging
from 0 to 1) into log odds; its inverse maps any log-odds
value back onto the 0-to-1 probability scale.
▪ A logit curve is therefore a graph of these logits plotted
against an explanatory variable.
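The logit and its inverse can be sketched in a few lines of Python (an illustrative addition, not part of the original slides):

```python
import math

def logit(p):
    """Map a probability p in (0, 1) to log odds on the real line."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Map a log-odds value back to a probability in (0, 1)."""
    return 1 / (1 + math.exp(-x))

print(logit(0.5))      # 0.0 -- a probability of .5 is even odds
print(inv_logit(2.0))  # ~0.88 -- positive log odds mean p > .5
```

Plotting `inv_logit` against an explanatory variable produces the S-shaped logit curve described above.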
▪ -2 log-likelihood (-2LL) provides an indication of the
total error in a logistic regression model. The larger the
value of -2LL, the less accurate the model's predictions.
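To make the idea concrete, here is a hedged sketch (with made-up outcomes and predicted probabilities, not data from the slides) of computing -2LL for a binary model:

```python
import math

def neg2_log_likelihood(y, p):
    """-2LL for binary outcomes y (0/1) and predicted probabilities p."""
    ll = sum(math.log(pi) if yi == 1 else math.log(1 - pi)
             for yi, pi in zip(y, p))
    return -2 * ll

y = [1, 0, 1, 1]             # hypothetical observed outcomes
good = [0.9, 0.1, 0.8, 0.7]  # confident, accurate predictions
poor = [0.6, 0.5, 0.5, 0.5]  # vague predictions
# The more accurate model yields the smaller -2LL.
print(neg2_log_likelihood(y, good) < neg2_log_likelihood(y, poor))  # True
```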
▪ The log of the odds, sometimes called the logit, is a
mathematical transformation of the odds that makes it
possible to build a regression model; the log of an odds
ratio (OR) is then simply the difference between two logits.
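A minimal sketch of odds, an odds ratio, and its log, using a hypothetical 2x2 table (these counts are illustrative, not from the slides):

```python
import math

# Hypothetical counts: outcome "yes"/"no" within each group
passed = {"yes": 30, "no": 10}   # odds of "yes" = 3.0
failed = {"yes": 20, "no": 20}   # odds of "yes" = 1.0

odds_passed = passed["yes"] / passed["no"]
odds_failed = failed["yes"] / failed["no"]
odds_ratio = odds_passed / odds_failed        # 3.0

# The log of the OR equals the difference between the two logits.
log_or = math.log(odds_ratio)
print(round(log_or, 3))  # 1.099
```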
Scenario: Let’s say you are a researcher studying predictors
of student interest. You collect data from 200 students on
several variables.
INDEPENDENT VARIABLES
“Pass” indicates whether a student passed (coded 1) or failed
(coded 0) a previous subject matter test.
“Masteryg” is mastery goals (higher scores indicate greater
mastery goals).
“Fearfail” is fear of failure (higher scores indicate greater fear
of failure). “Masteryg” and “Fearfail” are treated as
continuous variables.
“Genderid” is a binary variable (like pass), dummy coded
0=identified male, 1=identified female.
DEPENDENT VARIABLES
“Interestlev” is an ordered, categorical variable indicating
students’ self-reported interest for the next topic in class. It is
coded 1=low interest, 2=medium interest, 3=high interest.
Ordinal logistic regression (using SPSS): Route 1
Here, we place the “Interestlev” variable in the Dependent box and the
remaining variables (IVs) in the Covariate(s) box. Although “pass” and
“genderid” are categorical variables, we can include them as covariates
because they are binary.
However, if you have categorical variables with more than two levels, then you
must use the Factor(s) box for them. [FYI, you could have also entered the
above variables as factors, but I prefer having control over the designation of
the reference category; SPSS defaults to treating the category with the
higher value as the reference category.]
In general, categorical (nominal or ordinal) explanatory variables are
entered in the Factor(s) box, while continuous explanatory variables
are entered as covariates.
Click “Output” and select “Test of parallel lines”, which provides
a test of the proportional odds assumption.
The Case Processing Summary tells you the proportion of
cases falling at each level of the dependent variable
(Interestlev).
1=Low interest
2=Medium interest,
3=High interest
The Model Fitting Information table contains the -2 Log
Likelihood for the intercept-only (null) model and for the full
model (containing the full set of predictors).
We also have a likelihood ratio chi-square test to test
whether there is a significant improvement in fit of the Final
model relative to the Intercept only model.
In this case, we see a significant improvement in fit of the
Final model over the null model [χ²(4)=30.249, p<.001].
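The likelihood ratio chi-square is simply the drop in -2LL from the null model to the full model, with df equal to the number of added predictors. A sketch with hypothetical -2LL values chosen to reproduce the reported statistic (the slides report only the chi-square itself, not the two -2LLs):

```python
# Hypothetical -2LL values, for illustration only.
neg2ll_null = 410.0    # intercept-only (null) model
neg2ll_full = 379.751  # full model with the 4 predictors

lr_chisq = neg2ll_null - neg2ll_full  # difference in -2LL
df = 4                                # number of predictors added
print(round(lr_chisq, 3))  # 30.249 -- compared to a chi-square with df = 4
```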
We compare the final model against the baseline to see whether it has
significantly improved the fit to the data. The Model fitting
Information table gives the -2 log-likelihood values for the baseline and
the final model, and SPSS performs a chi-square to test the difference
between the -2LL for the two models.
The statistically significant chi-square statistic (p<.0005) indicates that
the Final model gives a significant improvement over the baseline
intercept-only model. This tells you that the final model gives better
predictions.
The “Goodness of Fit” table contains the Deviance and
Pearson chi-square tests, which are useful for determining
whether a model exhibits good fit to the data. Non-significant
test results indicate that the model fits the data well
(Field, 2018; Petrucci, 2009).
Deviance (-2LL)
This is the log-likelihood multiplied by -2 and is commonly used to explore
how well a logistic regression model fits the data.
The lower this value is the better your model is at predicting your binary
outcome variable.
In this analysis, we see that the Pearson chi-square test
[χ²(394)=400.412, p=.401] and the deviance test
[χ²(394)=403.353, p=.362] were both non-significant. These
results suggest good model fit.
Here, we have the regression coefficients and significance tests
for each of the independent variables in the model. The
regression coefficients are interpreted as:
The predicted change in log odds of being in a higher (as
opposed to a lower) group/category on the dependent variable
(controlling for the remaining independent variables) per unit
increase on the independent variable.
We interpret a positive Estimate (b) in the following way:
For every one unit increase on an independent variable, there is
a predicted increase (of a certain amount) in the log odds of
falling at a higher level of the dependent variable.
More generally, this indicates that as scores increase on an
independent variable, there is an increased probability of falling
at a higher level on the dependent variable.
We interpret a negative Estimate (b) in the following way:
For every one unit increase on an independent variable, there is
a predicted decrease (of a certain amount) in the log odds of
falling at a higher level of the dependent variable.
More generally, this indicates that as scores increase on an
independent variable, there is a decreased probability of falling
at a higher level on the dependent variable.
Mastery goals was a significant positive predictor of Interest in
the next topic. For every one unit increase on mastery goals,
there is a predicted increase of .026 in the log odds of a student
being in a higher (as opposed to lower) category on Interest.
This indicates that students scoring higher on mastery goals
were more likely to indicate greater interest in the next topic.
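Exponentiating a coefficient converts it from the log-odds scale to an odds ratio. Note that exponentiating the rounded b = .026 gives 1.026; the OR of 1.027 reported in the Route 2 output comes from the unrounded coefficient:

```python
import math

b_masteryg = 0.026  # rounded coefficient from the output
or_masteryg = math.exp(b_masteryg)
# ~1.026 from the rounded b (the output's 1.027 uses the unrounded b)
print(round(or_masteryg, 3))  # 1.026
```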
Fear of failure was not a significant predictor in the model. [The
coefficient is interpreted as follows: For every one unit increase
on fear of failure, there is a predicted decrease of .015 in the
log odds of being in a higher level of the dependent variable.]
Pass was a significant positive predictor of Interest. Since Pass is a
binary variable, the slope represents the difference in log odds
between individuals in the “failed” group and the “passed” group.
The log odds of being in a higher level on Interest was .820 points
higher on average for those who passed the previous subject matter
test as compared to those who failed the test.
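The same exponentiation applies to the Pass coefficient, recovering (to rounding) the odds ratio of 2.270 reported in the Route 2 output:

```python
import math

b_pass = 0.820  # difference in log odds, passed vs. failed
or_pass = math.exp(b_pass)
# The odds of higher Interest for passers are ~2.27 times those of failers.
print(round(or_pass, 2))  # 2.27
```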
Gender identification was not a significant predictor. [Again,
because this is a binary variable the slope can be thought of as
the difference in log odds between groups. On average, the log
odds of being in a higher Interest category was .232 points
greater for students identified as female than for those
identified as male.]
▪ Assumption of proportional odds (SPSS calls this
the assumption of parallel lines but it’s the same thing). This
assumes that the explanatory variables have the same effect
on the odds regardless of the threshold.
▪ As mentioned previously, OLR assumes that the relationship
between the IVs and the dependent variable is the same “across
all possible comparisons” (Osborne, 2017, p. 147) involving the
dependent variable – an assumption referred to as proportional odds.
▪ When the result of the test of parallel lines (i.e., the test of the
proportional odds assumption) is non-significant, we interpret it
to mean that the assumption is satisfied. Statistical significance is
taken as an indicator that the assumption is not satisfied.
▪ In the results from our analysis, we interpret the results to mean
that the assumption is satisfied (as p=.854).
Ordinal logistic regression (using SPSS): Route 2
(using generalized linear models option)
One downside of using the previous option is that we cannot get odds
ratios (ORs), reflecting the changing odds of a case falling at the next
higher level on the dependent variable. Moreover, the test results
associated with the independent variables are based solely on the Wald
test. These results can be less powerful than those based on likelihood
ratio chi-square tests. Using the Generalized Linear Models option, we
can obtain all of this additional information.
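For reference, the Wald test mentioned here squares the ratio of a coefficient to its standard error and compares it to a chi-square distribution with 1 df. A sketch with hypothetical numbers (not taken from the slides' output):

```python
# Hypothetical coefficient and standard error, for illustration only.
b, se = 0.820, 0.300
wald_chisq = (b / se) ** 2  # compared to a chi-square with 1 df
print(round(wald_chisq, 3))  # 7.471
```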
If you have “factor” variables, you could include them in the
Factors box; unlike Route 1, you can actually specify the
reference category. Independent variables not treated as
factors are entered as covariates.
Here, I have requested
Likelihood ratio chi-square
statistics and odds ratios to be
printed in the output.
These are various goodness of
fit statistics.
You’ll notice that although the
Pearson chi-square and
Deviance appear in this table,
test results are not provided (as
we saw in the Goodness of fit
table via Route 1).
Nevertheless, both values and
degrees of freedom are
provided, which could be used
to test for model fit using the
chi-square distribution. (Of
course, it’s probably less work
to obtain that information via
Route 1)
This is the Likelihood ratio chi-square
test we saw via Route 1. We see that
our full model was a significant
improvement in fit over the null (no
predictors) model [χ²(4)=30.249,
p<.001].
Running your logistic regression through this route will allow you to obtain
both Wald tests of the predictors (see test results under Parameter
Estimates) and Likelihood ratio tests (see Tests of Model Effects). For the
most part, the p-values from both tables are very consistent.
A closer look at the table:
Here, you’ll see roughly the same information contained in the previous table
of regression coefficients through Route 1. One of the main differences is the
Exp(B) column (and confidence interval). The Exp(B) column contains odds
ratios reflecting the multiplicative change in the odds of being in a higher
category on the dependent variable for every one unit increase on the
independent variable, holding the remaining independent variables constant.
An odds ratio > 1 suggests an increasing probability of being in a higher level on
the dependent variable as values on an independent variable increase,
whereas a ratio < 1 suggests a decreasing probability with increasing values on
an independent variable. An odds ratio = 1 suggests no predicted change in the
likelihood of being in a higher category as values on an independent variable
increase.
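Under the proportional odds model, category probabilities come from differences of cumulative probabilities that share a single set of slopes across thresholds. A self-contained sketch with hypothetical thresholds and a hypothetical coefficient (none of these numbers come from the slides), using the SPSS-style parameterization logit P(Y ≤ k) = threshold_k − bx:

```python
import math

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical thresholds and slope for a 3-category ordinal outcome.
thresholds = [-1.0, 1.0]  # cut-points: low|medium and medium|high
b = 0.5                   # one slope, shared across both thresholds
x = 0.8                   # a predictor value

eta = b * x
cum = [inv_logit(t - eta) for t in thresholds]  # P(Y <= low), P(Y <= medium)
p_low = cum[0]
p_med = cum[1] - cum[0]
p_high = 1 - cum[1]
probs = [p_low, p_med, p_high]
print(round(sum(probs), 10))  # 1.0 -- the three probabilities sum to one
```

Because the same `b` appears in every cumulative logit, the predictor shifts all the thresholds together, which is exactly what the test of parallel lines checks.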
As before, mastery goals was a significant positive predictor of Interest in the next
topic. For every one unit increase on mastery goals, there is a predicted increase of
.026 in the log odds of a student being in a higher level of the Interest (dependent)
variable. This indicates that students scoring higher on mastery goals were more
likely to indicate greater interest in the next topic.
The odds ratio indicates that the odds of being in a higher category on Interest
increases by a factor of 1.027 for every one unit increase on mastery goals.
Fear of failure was not a significant predictor in the model. [The regression
coefficient indicates that for every one unit increase on fear of failure, there is a
predicted decrease of .015 in the log odds of being in a higher level of the
dependent variable (controlling for the remaining predictors).]
The odds ratio indicates that the odds of being in a higher category on Interest
change by a factor of .985 for every one unit increase on fear of failure. [Given
that the odds ratio is < 1, this indicates a decreasing probability of being in a higher
level on the Interest variable as scores increase on fear of failure.]
▪ Pass was a significant positive predictor of Interest. The log odds of being in a
higher level on Interest was .820 points higher on average for those who passed
the previous subject matter test than those who failed the test.
▪ The odds of students who passed (the previous subject matter test) being in a
higher category on the dependent variable were 2.270 times that of those who
failed the test.
▪ Gender identification was not a significant predictor. [On average, the log odds of
being in a higher Interest category was .232 points greater for females than
males.]
▪ The odds of a student identified as female being in a higher category on the
dependent variable was 1.261 times that of a student identified as male (although
again, gender identification was not a significant predictor).
References
Crowson, M. Ordinal logistic regression using SPSS [Video]. YouTube.
https://www.youtube.com/watch?v=rSCdwZD1DuM