The Effect of Pre-Lecture Preparation Time on Professors’ CAPE Score
Irvin Lan*
University of California, San Diego
Econ 120BH
March 2016
Keywords: CAPE score, Lecture preparation time, Average grade expected, CAPE evaluations.
*I wish to thank Professor Berman for his helpful advice and guidance throughout the entire process of writing this
paper and to Pablo Ruiz Junco and Ying Jenny Feng for their meaningful comments. Additional thanks to the professors of
UCSD for their participation in my survey, without whom this paper would not have been possible.
1 Introduction
At a university as large as UCSD, students encounter a wide variety of teaching styles. Some
professors convey knowledge through PowerPoint slides while others illustrate concepts with chalk in
the traditional lecture format. Some lectures are thrilling and leave students feeling stimulated, while
others are monotonous and leave students wondering whether attending was worth the opportunity cost.
At quarter's end, UCSD students review their professors through CAPE, and professors receive
student recommendation ratings generated from the CAPE evaluation results. Seemingly these ratings are at
the discretion of the students enrolled in a class, but I wonder whether professors are in fact able to influence the
scores they receive. I hypothesize that student perception and feedback is only one side of the
coin, and that professor characteristics play a significant role, making them essential to painting the complete
picture behind each CAPE score. In this paper I look at statistics from 127 undergraduate courses taught by 85
UCSD professors during fall quarter 2015 to see whether the time that a professor spends preparing
before lecture has a statistically significant effect on the CAPE scores they receive.
2 Theory
CAPE Score = β0 + β1LecturePrepTime + β2PrepTimeSquared + β3AvgGradeExpected + β4CapeEval +
β5StudyHours + β6YearsTeachingUCSD + β7AssocProfessor + β8Professor + ε
In this paper, a best linear predictor is used to illustrate the relationship between the dependent
variable CAPE Score and professor characteristics. LecturePrepTime is an independent variable for
the amount of time that it took for a professor to prepare prior to giving a lecture during Fall Quarter
2015. In addition, to account for possible diminishing effects of lecture preparation time, the variable
PrepTimeSquared is included. The variable AvgGradeExpected is the average grade that students
expect to receive from a professor and is used as a proxy variable for how closely exams and
assignments are written to complement lecture materials, implying a professor’s ability to gauge student
learning. CapeEval is a variable for the number of CAPE evaluations made in each class and is used as
a proxy to control for class size. StudyHours is a variable for student study hours per week that is
related to the amount of prep time involved in a lecture and also is associated with the dependent
variable CAPE Score. An important characteristic of UCSD professors is the number of years that they
have taught at UCSD, YearsTeachingUCSD. AssocProfessor is a binary variable that will take on
value 1 if the professor holds the academic rank of associate professor or 0 if they hold the rank of
lecturer. Similarly, Professor is a binary variable that will take on value 1 if the professor holds the rank
of professor or 0 if they hold the rank of lecturer.
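A specification like this can be estimated by OLS with heteroskedasticity-robust standard errors. The sketch below is an illustration only: it uses plain NumPy and simulated data in place of the survey sample (the variable names, the data-generating process, and the restriction to the prep-time and grade terms are all assumptions made for brevity, not the paper's actual estimation code).

```python
import numpy as np

# Simulated stand-in for the n = 127 classes (NOT the paper's data).
rng = np.random.default_rng(0)
n = 127
prep = rng.uniform(0.25, 13, n)       # LecturePrepTime
grade = rng.uniform(2.5, 4.0, n)      # AvgGradeExpected
score = 20 + 4 * prep - 0.4 * prep**2 + 17 * grade + rng.normal(0, 10, n)

# Design matrix: constant, prep time, its square, expected grade.
X = np.column_stack([np.ones(n), prep, prep**2, grade])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# HC1 heteroskedasticity-robust standard errors, the kind reported in Table 2.
resid = score - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
cov = n / (n - X.shape[1]) * XtX_inv @ meat @ XtX_inv
se = np.sqrt(np.diag(cov))
```

On the simulated data the fitted quadratic recovers a positive linear term and a negative squared term, mirroring the diminishing-returns story told below.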
On the assumption that more time spent preparing for a lecture is likely to yield better results, β1
is likely positive. β2 is likely negative since there may be diminishing marginal benefits to additional hours of
preparation time. β3 is likely positive since higher grade expectation suggests better assessment and
understanding of student learning ability, and therefore a higher CAPE Score. We can also expect β4 to
be positive, since professors who lecture to larger classes tend to prepare more before lecture, so the
two variables should move in the same direction with a positive effect on CAPE Score. β5 is
likely to be negative since students have to spend more time studying materials on their own if
professors are not well prepared. β6 is expected to be positive on the assumption that professors who
have taught many years at UCSD require less lecture preparation time and are more adept at leading a
positive lecture. Because of the possible effect of reputation, one would expect β7 and β8 to be positive. It
is also possible that the signs and magnitudes of the coefficients on professor title are not statistically
significant.
3 Data
The data on the 127 classes taught by 85 professors are from UCSD Fall Quarter 2015. To collect data
on professors, I created an online survey using Google Forms to extract two key professor
characteristics, Experience Teaching at UCSD and Lecture Preparation Time. The results from the
survey were then transferred to an Excel spreadsheet. The following is a link to my survey:
https://docs.google.com/forms/d/1JTJ2wdeSPLGW0cJVWNPK7YRi6YWyhvJNaeFDXho0oUg/viewform?usp=send_form.
This link was sent via email to professors who taught at UCSD during fall 2015. The professor
names provided in the survey made it possible to collect additional corresponding CAPE data from the
CAPE website https://cape.ucsd.edu/responses/Results.aspx. The website provided data on CAPE Score
(instructor recommendation), CAPE Evaluations Made, Study Hours/Week, and Average Grade
Expected. I then used blink.ucsd.edu to search for professor titles and referenced department websites
when a professor could not be located on the UCSD employee database.
Below is the table of means:
Table 1. Summary Statistics
Variable Observations Mean Std. Dev. Min. Max.
CAPE Score 127 88.58606 13.23671 33.3 100
Lecture Preparation Time (Hrs) 127 3.45248 2.480102 0.25 13
Average Grade Expected 127 3.386425 0.279747 2.5 4
Experience Teaching at UCSD (Yrs) 127 10.06031 10.45134 0.33 50
CAPE Evaluations Made 127 72.66929 73.15064 2 318
Study Hours/Wk 127 6.271811 1.827642 2.5 12.9
Associate Professor 127 0.314961 0.466340 0 1
Professor 127 0.338583 0.475102 0 1
Lecturer 127 0.354331 0.480204 0 1
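As a hypothetical sketch of how a means table like Table 1 can be produced from the merged spreadsheet, the snippet below uses pandas on a tiny invented data frame (the column names and values are illustrative assumptions, not the paper's data):

```python
import pandas as pd

# Toy stand-in for the merged survey + CAPE spreadsheet (values invented).
df = pd.DataFrame({
    "cape_score": [88.6, 75.0, 95.2, 100.0],
    "prep_hours": [3.5, 1.0, 6.0, 0.25],
})

# One row per variable: observations, mean, std. dev., min, max.
summary = df.agg(["count", "mean", "std", "min", "max"]).T
print(summary)
```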
4 Results
Table 2. Results of Regression of CAPE Score on Classroom and Professor Characteristics
Dependent variable: CAPE Score
Regressor (1) (2) (3) (4) (5)
LecturePrepTime 0.138 4.940** 4.310** 4.039** 4.245**
(0.795) (2.064) (1.928) (1.906) (1.852)
PrepTimeSquared -0.472** -0.418** -0.399* -0.414**
(0.226) (0.209) (0.206) (0.198)
AvgGradeExpected 17.82*** 17.00*** 17.80***
(3.408) (3.913) (3.841)
CapeEval 0.011 0.012
(0.013) (0.013)
StudyHours -0.511 -0.377
(0.677) (0.655)
YearsTeachingUCSD 0.021
(0.114)
AssocProfessor -1.949
(2.687)
Professor 2.226
(2.769)
Constant 88.11*** 80.22*** 20.90* 26.70* 22.28
(2.732) (3.682) (11.70) (16.10) (15.78)
Summary Statistics
Observations 127 127 127 127 127
R-Squared 0.001 0.090 0.230 0.238 0.256
RMSE 13.285 12.731 11.757 11.789 11.797
Robust standard errors in parentheses
***p<0.01, **p<0.05, *p<0.1
From each of the regressions performed on the data, the results show that increasing preparation
time is associated with a higher CAPE score, keeping all other factors constant. In addition,
there are diminishing marginal returns to preparation. Setting the first-order derivative of the long
regression to zero indicates that 5.127 hours is the preparation time predicted to maximize CAPE Score, and increasing
preparation time beyond that will likely not yield significant benefits. The downward sloping portion of
the quadratic curve is not used for prediction since it is not covered by much actual observed data. This
treatment seems reasonable since a professor who prepares 6.127 hours is expected to present a lecture
at least as well as a professor who prepares an hour less, all else constant. Notice also that the
coefficient on preparation time in the short regression (2) is positively biased, as it absorbs the effect
that omitted variables like Average Grade Expected have on CAPE Score. Thus in the long regression
(5), once these variables are included, the coefficient on Lecture Preparation Time falls from 4.940 to
4.245 (still significant at the 5% level), relieving some of the omitted variable bias.
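The turning point quoted above follows from setting the derivative of the fitted quadratic in preparation time to zero, prep* = -β1/(2·β2). With the column (5) coefficients:

```python
# Coefficients on LecturePrepTime and PrepTimeSquared from column (5).
b1, b2 = 4.245, -0.414
prep_star = -b1 / (2 * b2)   # argmax of b1*prep + b2*prep**2
print(round(prep_star, 3))   # 5.127 hours
```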
Another noteworthy result is the dependence of CAPE Score on the average grade expected in a
class. On average, a 0.25 increase in expected grade distribution is associated with a 4.45 point increase
in CAPE Score, keeping all other variables constant. The results show that we reject the null that
average grade expected has no effect on CAPE Score at the 1% level of significance. Classes with
higher grade distributions are more likely to have higher CAPE Scores. A scatterplot of CAPE Score
against Average Grade Expected (figure omitted) illustrates this positive relationship.
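The 4.45-point figure is simply the column (5) coefficient on AvgGradeExpected scaled to a 0.25-grade change:

```python
beta_grade = 17.80           # column (5) coefficient on AvgGradeExpected
effect = 0.25 * beta_grade   # predicted CAPE Score change for +0.25 grade
print(round(effect, 2))      # 4.45
```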
From the results we see that the variables Lecture Preparation Time and Average Grade
Expected are highly significant. Though the other variables are not jointly significant with an
F-statistic of 1.03 they are still important to keep in the model as they are associated with both lecture
preparation time and CAPE Score. One reason for including years taught at UCSD is that professors
with more experience might have been teaching long enough such that they don’t need to prepare as
much as professors who are new, so it is important to control for experience to the extent that it affects CAPE
Scores. Furthermore, note the change in the significance of the constant term from the short regression
to long regression. In the short regression the constant term is significant at the 1% level, which
suggests that there are relevant variables in the error term that have not been identified; in the long
regression the constant term becomes insignificant, with a t-statistic of just 1.41, as relevant variables
are added. In addition, the standard error of the regression (RMSE) decreases from 13.285 to 11.797,
which means the typical deviation of an actual CAPE score from its predicted value is about 11.797
points, a reasonably good fit for the model.
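The paper's F-statistic of 1.03 for the joint insignificance of the remaining controls is the robust version, computed from the coefficient covariance matrix. As a rough homoskedastic cross-check (an assumption for illustration, so the number differs from 1.03), the classic R²-based formula can be applied to Table 2, taking column (3) as the restricted model and column (5) as the unrestricted one with q = 5 added controls:

```python
# Homoskedastic joint F-statistic from R-squared values in Table 2.
# Restricted model: column (3); unrestricted: column (5) with 5 extra controls.
r2_u, r2_r = 0.256, 0.230
n, k, q = 127, 9, 5   # observations, parameters in (5), restrictions
F = ((r2_u - r2_r) / q) / ((1 - r2_u) / (n - k))
print(round(F, 2))
```

Either way the statistic is small, consistent with the conclusion that these controls are not jointly significant.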
5 Conclusion
The results support my hypothesis and show that preparation time and the CAPE Score received
by professors are in fact related. At the 5% significance level, increasing preparation time before a
lecture, up to about 5.127 hours, is associated with a higher CAPE Score. This makes sense: by
preparing more, holding all other variables constant, professors put more thought into the lesson,
which yields better results.
Another important factor that influences CAPE Score is the grade that students expect to receive
from their professors. At the 1% significance level, high CAPE Scores are associated with high
average expected grades. A possible explanation is that difficult tests which do not resemble the material
presented in lecture and homework assignments may lead students to turn against the professor
during CAPE evaluations. Conversely, a professor who tests on material that students have practiced
in homework and during lecture is more likely to receive high CAPE Scores.
I would like to estimate the linear causal effect that Lecture Preparation Time has on CAPE
Score but a weakness of this model is that it does not account for endogenous variables in the error term
that are difficult to observe such as professor ability, resulting in omitted variable bias and a poor
estimation of the coefficient of interest. Ability includes factors such as how well professors are able to
communicate with students and their motivation to help students learn. For example, some professors
who are less stimulated by the course material may spend less effort providing thoughtful intuition to
students. Some professors could have more energy in the way they speak that affects how well students
are able to connect and learn from them. In a future project, panel data can be used which will allow for
the addition of professor fixed effects to control for time-invariant professor characteristics such as
innate teaching ability or motivation. Professor fixed effects may help to explain the data from the
professors who prepare less but still get high CAPE Scores.
The findings from my analysis are interesting as they indicate that professors likely have some
control over the CAPE scores they receive. Though each professor has a unique teaching style,
preparedness is universal, and better-prepared instructors are recognized by students for their
dedication to providing a quality education.