Analysis of Lafayette College Course Evaluations
By Bruce Keller and JJ Wanda
10/20/2014
Introduction
The creation, implementation, and interpretation of course evaluations consume a
considerable amount of resources, and the validity of these instruments should be assessed to
determine how beneficial they truly are and how heavily they should weigh on policy decisions.
Our study determines how well items on course evaluations at Lafayette College
predict the overall self-reported value of the course. Students tend to be relatively unbiased in
their ratings, using consistent weighting for course content across different professors and classes
(Broder and Dorfman, 1994); therefore, self-report information from Lafayette course evaluations
should be a valid measure of course characteristics, such as enthusiasm of the instructor and
course organization, for the parameters of our model.
In addition, our study uses as the dependent variable a self-report item on course
evaluations asking students what they felt the value of the course was. One might expect that
students would rate a less rigorous course more favorably, and vice versa, out of enjoyment of
the course rather than its value. Fortunately, this does not appear to be the case. The same study
found that approximately 80% of the explained variance in teacher quality is related to enjoyment
of the learning process, while over 90% of the explained variance in course value is due to the
quantity and quality of material learned in the course. This research indicates that students'
enjoyment of a course is paramount in fostering learning, and that they rate courses more
favorably based on how much they learned.
Our study uses the most widely supported factors, such as amount of course
content, and introduces a new factor that has received little coverage, availability of extra
help (Broder and Dorfman, 1994), while omitting factors that have shown little support, such as
teaching experience (Harris and Sass, 2003).
Theoretical Analysis
Yi = β0 + β1 CCi + β2 TMi + β3 Ei + β4 EHi + εi

(expected sign of each slope coefficient: positive)
How do different class characteristics influence overall course ratings?
Dependent Variable
Yi = Mean Score for Overall Value of the ith Section.

Independent Variables
CCi = Mean Score for Course Content for the ith section.
TMi = Mean Score for Effectiveness of Teaching Material for the ith Section.
Ei = Mean Score for the Instructor's Use of Examples for the ith Section.
EHi = Mean Score for Availability of Extra Help for the ith Section.
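As a sketch of how a model of this form can be estimated, the following runs ordinary least squares with NumPy. The data here are synthetic stand-ins (the real section means come from the evaluation site), and the coefficient values used to generate them are only illustrative:

```python
import numpy as np

# Hypothetical section-level mean scores, one row per section.
# Columns stand in for CC, TM, E, EH as defined above; values are made up.
rng = np.random.default_rng(0)
n = 468  # same sample size as the study
X = rng.uniform(1, 5, size=(n, 4))

# Illustrative "true" parameters: intercept followed by four slopes.
beta_true = np.array([0.65, 0.26, 0.14, 0.25, 0.14])
y = beta_true[0] + X @ beta_true[1:] + rng.normal(0, 0.3, n)

# Add an intercept column and solve the least-squares problem.
X1 = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(beta_hat)  # estimates of [beta0, beta1, beta2, beta3, beta4]
```

With 468 observations the recovered slopes land close to the generating values, which is the same mechanism that produces the estimates reported below.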
Description of Data
We collected the data using the Lafayette College course evaluation website1. We created
a program to parse all of the data from the website into a file that could be opened in Excel.
The website hosts about 15 years of course evaluations. Our criteria for choosing a specific
semester were the number of questions on the evaluations and the number of courses. Using these
criteria, we selected the Fall 2005 semester because it had the largest number of sections with a
format that included more items. Selecting all courses gave us a sample size of 468 sections
and 26 questions. We treated sections as separate courses even when they were taught by the
same instructor. The evaluations use a 5-point Likert scale, with 1 being very poor and 5 being
excellent. For each section, we used the mean of the individual evaluations for a class. Using
the 468 points in our sample, we calculated the mean and standard deviation for each variable.
Variable  Mean  Stdev
Y         3.73  0.56
CC        4.01  0.68
TM        3.80  0.63
E         3.82  0.65
EH        3.79  0.69
1 https://fac-eval.lafayette.edu
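The per-variable means and standard deviations above come directly from the parsed section means. A minimal sketch of that computation (the rows here are hypothetical; the real file holds all 468 sections):

```python
import numpy as np

# Hypothetical parsed data: each row is one section's mean scores for
# (Y, CC, TM, E, EH). The real rows were parsed from the evaluation site.
scores = np.array([
    [3.7, 4.0, 3.8, 3.8, 3.8],
    [4.2, 4.5, 4.1, 4.3, 4.0],
    [3.1, 3.6, 3.2, 3.3, 3.4],
])

labels = ["Y", "CC", "TM", "E", "EH"]
for label, col in zip(labels, scores.T):
    # ddof=1 gives the sample standard deviation.
    print(f"{label}: mean={col.mean():.2f} stdev={col.std(ddof=1):.2f}")
```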
Regression Estimates
Table 1: Regression Estimates

Variable            Model 1          Model 2
CC                  0.2626 (.0415)   0.3037 (.0410)
TM                  0.1398 (.0392)   0.2179 (.0347)
E                   0.2541 (.0379)   0.2649 (.0384)
EH                  0.1383 (.0340)   --
Constant            0.65   (.0881)   0.6713 (.0894)
Adjusted R-squared  0.7347           0.7257
The table above shows the coefficients, with standard errors in parentheses, and the
adjusted R2 for each model. We created two models, one with extra help and one without; none
of the studies in our research included a variable similar to extra help. When we removed it, the
adjusted R-squared was slightly lower and the remaining coefficients did not change in
significance. As a result of these findings, we selected Model 1 as the correct model. To confirm
that a linear model is appropriate, we used a scatter plot to check whether our data showed any
trend other than linear.
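The model comparison above can be reproduced by fitting both specifications and comparing their adjusted R-squared values. A sketch on synthetic stand-in data (the coefficient values used to generate it are illustrative, not the study's data):

```python
import numpy as np

def adjusted_r2(y, X):
    """Fit OLS with an intercept and return the adjusted R-squared."""
    n, k = X.shape
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    # Penalize for the number of predictors k.
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Hypothetical data standing in for the section means (CC, TM, E, EH).
rng = np.random.default_rng(1)
X_full = rng.uniform(1, 5, size=(468, 4))
y = 0.65 + X_full @ np.array([0.26, 0.14, 0.25, 0.14]) + rng.normal(0, 0.3, 468)

print(adjusted_r2(y, X_full))         # Model 1: all four predictors
print(adjusted_r2(y, X_full[:, :3]))  # Model 2: drops EH
```

Because adjusted R-squared charges a penalty for each extra predictor, the gain from keeping EH only survives if EH genuinely adds explanatory power, which is the basis for preferring Model 1.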
The graph supports the high R2, since it shows a positive correlation between each
exogenous variable and the endogenous overall course value scores. Using this linear
model, we can test whether we can reject the null hypotheses.
CC - H0: β1 ≤ 0; Ha: β1 > 0
TM - H0: β2 ≤ 0; Ha: β2 > 0
E - H0: β3 ≤ 0; Ha: β3 > 0
EH - H0: β4 ≤ 0; Ha: β4 > 0
Previous research found positive coefficients for course content (CC), teaching material
(TM), and use of examples (E), or equivalent variables. Extra help (EH) has not been well tested,
and we predicted it would have a positive coefficient. We rejected the null hypothesis for each
variable with 99.9% confidence. We predicted each coefficient to be positive, and our results
match that prediction.
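The rejections can be checked arithmetically from Table 1: dividing each Model 1 coefficient by the parenthesized value (read here as a standard error, an assumption) gives a t-statistic, which is compared against a one-sided critical value. With roughly 463 degrees of freedom, the 0.001-level one-sided critical value is about 3.10, close to the normal quantile:

```python
# Model 1 estimates from Table 1: (coefficient, standard error).
coefs = {"CC": (0.2626, 0.0415), "TM": (0.1398, 0.0392),
         "E": (0.2541, 0.0379), "EH": (0.1383, 0.0340)}

# Approximate one-sided critical value at the 0.001 level, ~463 df.
T_CRIT = 3.10

for name, (b, se) in coefs.items():
    t = b / se
    print(f"{name}: t = {t:.2f}, reject H0: {t > T_CRIT}")
```

Every t-statistic clears the critical value, consistent with rejecting each null hypothesis at 99.9% confidence.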
Empirical Results
Our results match our expectations. All factors incorporated into our model were
significant at the 0.01 level. Course content (CC), teaching material (TM), and use of examples
(E) had high coefficients that are consistent with previous research (Broder and Dorfman, 1994).
Extra help (EH), although overlooked by other research, had a substantial coefficient of
approximately 0.14. Overall, our model explains approximately 74% of the variance in overall
course value.
Conclusion
Our model had substantial explanatory power (adjusted R2 of about 74%), even more than
other studies tended to find (61%), despite using almost all the same variables (each significant,
with a coefficient greater than 0.01). This may indicate variance between different colleges.
Future research could examine how these supported factors vary across institutions.
Works Cited
Broder, J., & Dorfman, J. (1994). Determinants of Teaching Quality:
What's Important to Students. Research in Higher Education, 235-248.
Harris, D., & Sass, T. (2003). Teacher training, teacher quality and
student achievement. Journal of Public Economics, 798-812.