Evaluation of Teaching Performance for Improvement and Accountability

Presented by Larry Gould, Provost
November 2007


  1. Evaluation of Teaching Performance for Improvement and Accountability
     Center for Teaching Excellence and Learning Technologies: A Colloquium
     November 1, 2007
     Larry Gould
  2. The Current State of Affairs
     1. Less than useful primary instrument (e.g., formative feedback, personnel evaluation, program evaluation, advisement, who and for what?, etc.)
     2. Poor applicability to the virtual learning environment / what gets evaluated?
     3. Instruments inconsistent with policy
     4. Administration of TEVAL does not create confidence in results
     5. Less than efficient processing and analysis
  3. An Alternative Future: Pedagogical Responsibility
     1. Only things important*
        • Were exams and other graded materials returned on a timely basis?
        • Was there sufficient feedback on tests and papers?
        • Were students tested on materials covered in the course?
        • Were course materials well prepared?
        • Did the course unfold as promised in the syllabus?
        • Was the instructor accessible?
        • No more than ten questions related to pedagogical responsibility / comments for improvement
     2. Virtual learning environment – support systems, use of technology, receipt of materials, etc.
     *Adapted from Stanley Fish, “Who’s in charge here?”, Chronicle of Higher Education, 2/5/2005.
  4. Abuses and Misuses
     1. Beyond student input: over-reliance on ratings in the evaluation of teaching
     2. Making too much of too little
        • Relationship between teaching and learning / how does a student know?
        • Biases (gender, foreign-born instructors, ethnicity, attractive professors, easy graders, untenured professors, personality, class size, type of class, subject areas, required courses, instructor contamination, etc.)
        • Cutting the log with a razor: 3.0 versus 3.1? Huh?
     3. Not enough information to make a good judgment (one course does not a teacher make)
  5. Abuses and Misuses (continued)
     4. Questionable administration of ratings
     5. Using the elements of the instrument inappropriately (instructional-delivery-skills vs. content-expertise questions)
     6. Confusion and lack of attention to purpose, learning environment, and efficiencies in design processes
     7. Failure to conduct research to assess validity and reliability
     8. Considerations in selection of method for administration (online, paper, timing, who, etc.)