Across the nation, school administrators are being
tasked with adopting research-based evaluation
models to improve student achievement. The models
also must include student achievement measures that
impact a teacher’s effectiveness rating. Thus, the
political landscape is set for those whose job
description encompasses the responsibility for
ensuring that their teacher evaluation system meets
state standards.
To begin, let's acknowledge that administrators in
nearly every state face the same challenge. While there
are subtle differences among states, the general
direction is the same.
The focus on teacher evaluations is attributable to the
perception that American public schools are failing
students and that dismissal rates for identified
underperforming teachers would struggle to reach 1
percent in most school districts annually. Behind the
scenes, we know that the number of underperforming
teachers is much higher, but we have avoided
addressing the performance concerns for a multitude
of reasons. This is one reason the current focus is on
teacher evaluations and the data used to support
performance ratings.
In my state, Missouri, as a condition of receiving
stimulus money through the American Recovery and
Reinvestment Act of 2009, school districts were
directed to provide the results of teacher evaluations to
state departments, which reported the information to
the federal Department of Education. It was obvious
that teacher evaluations, and the administrators who
evaluate teachers, had entered a new era.
There are aspects of this new "focus" that are good, but
I also acknowledge the challenge: ensuring that only
highly effective teachers are standing in front of our
students. I believe there is a sensible approach
to having a robust and defensible evaluation system.
Understanding Evaluation Systems
Let’s begin with the evaluation instrument. Evaluation
models developed by state departments and private
vendors are similar. Each seeks to identify and develop:
• highly trained teachers who are lifelong learners.
• teachers who have excellent interpersonal and
communication skills.
• teachers who implement effective instructional
methodologies.
• effective assessment of student learning.
• a positive culture and climate for learning.
I am not convinced that any particular evaluation
model or system is superior to others. The
components are very similar; the differentiation lies in
administrators' ability to implement the model fairly
and consistently.
Research-based evaluation models focus on what
good teaching looks like. It is well-documented that
effective teachers have a positive impact upon student
learning. In fact, they bring a value-added component
to the classroom. One can predict, with proper data,
which teachers bring more value to students over the
course of a school year, which ones bring a year’s worth
of value, and which bring less than a year’s worth.
For evaluation models to improve student
achievement, ineffective teachers must become
effective or be moved out of the classroom to make
room for an effective one. Why is something that
sounds so easy and sensible in theory difficult to
implement in practice? What can we do to meet the
current state and federal requirements? I suggest the
following. It may sound basic, and it is.
1. The system owner for teacher evaluations must
truly own that process. This is normally a director or
assistant superintendent for human resource services;
in smaller districts, the HR responsibilities fall upon
the superintendent. In either case, he or she must
embrace the work and be a positive advocate for
ensuring that every student has an effective teacher.
2. Ensure that your administrators have a common
understanding of the descriptors for each indicator.
The descriptors identify the actions, traits and
behaviors used to rate a teacher.
3. Ensure that all principals' ratings fall within an
allowable range of the accurate rating for each
indicator.
4. Regularly monitor administrator ratings for
accuracy and consistency.
5. Implement a systematic process for administrators
to denote concerns with staff. This can be broken into
three levels.
• Level one: Teachers being monitored for possible
performance issues.
• Level two: Teachers engaged in formal conversations
regarding deficiencies.
• Level three: Teachers on a professional improvement
plan.
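The three levels above could be modeled in district software as a simple escalation state. The names and structure below are a hypothetical sketch, not a reference to any particular product.

```python
# Hypothetical model of the three-level performance-concern process,
# showing that escalation moves one level at a time and stops at the
# professional improvement plan.
from enum import IntEnum

class ConcernLevel(IntEnum):
    MONITORING = 1            # monitored for possible performance issues
    FORMAL_CONVERSATIONS = 2  # formal conversations regarding deficiencies
    IMPROVEMENT_PLAN = 3      # on a professional improvement plan

def escalate(level):
    """Move a concern one level up, capping at the improvement plan."""
    return ConcernLevel(min(level + 1, ConcernLevel.IMPROVEMENT_PLAN))

level = ConcernLevel.MONITORING
level = escalate(level)
print(level.name)  # FORMAL_CONVERSATIONS
```

A structure like this makes it easy to report, at any time, how many staff sit at each level of concern.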
6. Work collaboratively with the teacher associations
to reach consensus that ineffective teachers should be
removed from the classroom.
7. Ensure that appropriate professional development
processes are in place to support teacher improvement.
8. Talk frequently about the performance of staff.
9. Ensure that documentation is in place to support
administrator actions. This includes honest
conversations with staff about their deficiencies.
10. Have a system in place to de-select staff not
meeting performance standards.
Using Evaluation Technology to Your Advantage
To accommodate and facilitate each of those steps, I
researched and purchased a technology solution that
allowed us to easily manage our evaluation processes.
This technology was of value because it:
• included work flows that kept me informed of
performance issues.
• ensured that all processes were completed.
• automated professional improvement plans.
• supported inter-scorer reliability training.
• integrated teacher professional development.
• simplified monitoring of ratings consistency by
administrators.
• provided detailed reports and analytics in seconds.
By using cloud-based evaluation software to effectively
manage our district’s evaluations, we made the process
and data collection easy so we could spend our time on
the difficult part: leading in the remediation or
removal of ineffective staff.
To help with that often uncomfortable task, I would
often ask our principals, "If you had the opportunity
to open a new school tomorrow, would you take all of
your current staff with you? If not, are those that you
would not take aware that their performance does not
meet your expectations? Would you want them
teaching your child?”
There is a balance between philosophy and reality.
However, we must understand what the new reality is.
Performance ratings are quickly moving into a public
spotlight.
Dr. Mark Frost, who received
his Doctor of Education
degree from the University of
Missouri-Columbia, spent 28
years in education before
retiring in 2012, including
five years as a classroom
teacher; seven years as a
building principal; and 16
years as an Assistant
Superintendent for Human
Resources, most recently for
13 years at Park Hill School District in Kansas City, Mo.
He received AASPA’s national Raymond E. Curry award
for his work in employee health and wellness, and is a
frequent speaker, presenter and writer on K-12 hiring
practices, employee health and wellness, HR processes
automation, and performance evaluation topics. Dr.
Frost currently consults with school districts for
education services software provider Netchemia.
Continue the discussion with him by email at
mark.frost@netchemia.com.
Over the past three years, the Bill & Melinda Gates
Foundation has led some of the most influential work
around multiple measures in education through their
Measures of Effective Teaching (MET) project, a
partnership of more than 3,000 public school teachers
who voluntarily opened up their classrooms to
researchers. The study looked at three measures:
value-added analysis, evaluation, and student surveys
with the purpose of investigating “better ways to
identify and develop effective teaching” as well as
“help teachers and school systems close the gap
between their expectations for effective teaching and
what is actually happening in classrooms.”
Participating districts included Denver Public Schools,
Dallas Independent School District, Memphis Public
Schools, Pittsburgh Public Schools, New York City
Schools, Charlotte-Mecklenburg Schools, and
Hillsborough County Public Schools.
In January 2013, the MET project released their final
report, “Ensuring Fair and Reliable Measures of
Effective Teaching: Culminating Findings from the
MET Project's Three-Year Study." The report presented
several key findings that impact teachers,
building-level leaders, district and other
administrators, as well as human resources staff and
processes, including:
• Great teaching CAN be measured.
• Teachers need meaningful feedback to grow.
• Observations should be done by multiple
reviewers, multiple times. Specifically, the report notes
that shorter, more frequent observations from two or
more observers per teacher provide a more reliable
snapshot of true teacher performance than a single,
longer observation by one individual.
• Building processes that increase trust and fairness
will result in better data.
• Surveying students? Ensure confidentiality. The
report notes that student survey data becomes more
reliable when students feel that they are able to
provide anonymous feedback.
• Utilize multiple measures when building teacher
evaluation or performance index formulas. The
report states, “Compared with schemes that heavily
weight one measure, those that assign 33 percent to
50 percent of the weight to student achievement gains
achieve more consistency, avoid the risk of
encouraging too narrow a focus on any one aspect of
teaching, and can support a broader range of learning
objectives than measured by a single test.”
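The weighting scheme the report describes can be illustrated with a short sketch. The measure names, weights, and scores below are hypothetical, chosen only to show a composite in which achievement gains carry 33 to 50 percent of the weight.

```python
# Hypothetical composite evaluation score built from multiple weighted
# measures, following the MET report's suggestion of assigning 33-50
# percent of the weight to student achievement gains.

def composite_score(measures, weights):
    """Combine normalized measure scores (0-100) using fractional weights."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(measures[name] * w for name, w in weights.items())

# Illustrative values only: achievement gains weighted at 40 percent,
# with observations and student surveys making up the remainder.
weights = {"achievement_gains": 0.40, "observations": 0.35, "student_surveys": 0.25}
measures = {"achievement_gains": 72.0, "observations": 85.0, "student_surveys": 78.0}

score = composite_score(measures, weights)
print(round(score, 2))  # 0.40*72 + 0.35*85 + 0.25*78
```

The point of the sketch is the report's caution: no single measure dominates, so a narrow focus on any one aspect of teaching is not rewarded.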
Randi Weingarten, president of the
American Federation of Teachers, said in a statement,
“The MET findings reinforce the importance of
evaluating teachers based on a balance of multiple
measures of teaching effectiveness, in contrast to the
limitations of focusing on student test scores,
value-added scores, or any other single measure.”
In recent years, several states and districts across the
country have made changes to their educator
evaluation policies, while many others are currently
considering reforms. Applying lessons from the MET
project and similar work, there are some best
practices that all HR directors and other education
leaders should consider around teacher evaluation.
1. Communication is essential. Ensure you have a
process that is documented, communicated, and
available online for everyone to see. Even if the state
department of education or state legislature has made
the policy, make sure you understand it, know where
to go with questions, and are communicating the
status of the work with staff regularly. Communication
with parents and stakeholders is also important.
2. Select great measures. Unless your data is coming
from a state contract, work carefully to select a
vendor who supplies the information, level of
customer service, and transparency you need.
Commissioned by the Bill & Melinda Gates
Foundation, Battelle for Kids developed a free website,
www.edgrowthmeasures.org, with information to
consider when selecting a growth measures provider.
3. Train, check, train, recheck, and train again. Those
responsible for performing classroom observations
must be trained and evaluated continuously,
particularly when teachers’ evaluation scores are being
tied to high-stakes decisions around compensation,
tenure, promotion, or dismissal. It is also important to
keep the training process transparent.
Emily Douglas is a Director of Human Capital at Battelle for Kids, a
not-for-profit organization that works with states and school districts
across the country to improve educator effectiveness and accelerate student
growth. She has her Masters in Labor and Human Resources and MBA
from The Ohio State University Fisher College of Business as well as her
Senior Professional HR certification. In addition to her work at Battelle
for Kids, Emily keeps the K-12 Talent Manager blog for Education Week,
where she explores issues, trends, and promising practices for human
capital in education. She can be reached at edouglas@BattelleforKids.org
or on Twitter at @EmilyDouglasHC.
4. Work constantly to build understanding. Ensure
educators know how their performance is being
measured and evaluated. Find someone who can not
only help staff understand the data but also show them
how best to use it to improve their practice.
5. Encourage questions and feedback. There are many
places in the evaluation process where error or issue
can occur. Giving those evaluated the ability to
identify the issue and voice a resolution is important
for fairness, validity, rigor, and legality.
6. Be strategic in weighting measures. The MET report
found that evaluation systems which “assign 33
percent to 50 percent of the weight to student
achievement gains achieve more consistency… and
can support a broader range of learning objectives
than measured by a single test.” Multiple measures—
weighted appropriately—are critical to developing an
evaluation system that is reliable and helps educators
improve their practice. It is also important to
remember that it’s not necessarily the more measures
the better, but the more RIGHT measures the better.
7. Keep an eye on your data: If preliminary evaluation/
observation data doesn’t match the other performance
data you are collecting, look for ways to ensure
validity of evaluation scores. For example, the MET
project found that evaluations of teachers done by
multiple individuals and someone other than their
principals may yield more accurate results. Don’t get
to the end of the school year and then realize that
there are issues with the data that could have been
addressed early on.
8. Practice comprehensive and strategic human capital
management: Align your evaluation system to other
human capital initiatives, such as hiring, professional
development, and succession planning. It is essential
to give teachers and principals the chance to reflect on
their practice as well as time, resources, and
opportunities to improve.
Teacher evaluation will continue to be a hot topic in
states and districts across the country. No matter how
your district chooses to measure and evaluate
educator effectiveness, the MET report and similar
research from across the country offers important
lessons to consider in designing and implementing
these systems. It will be interesting to see how this
work influences federal, state, and local education
policy in the years to come.
Sections of this article first appeared in the K-12
Talent Manager blog on Education Week,
TopSchoolJobs in January 2012 and January 2013.
Used with permission from the author.
Students, teachers, site administrators, district and county leaders were asked the following question: what makes
an “effective teacher?” Eighteen high school students helped capture the answer to this question on camera.
When reviewing the video clips, the most frequently heard responses were: 1) The teacher likes me (and shows the
students that they do); 2) Effective teachers know their content; 3) They know how to teach their content; and 4)
They make the content relevant to the student. It is simple, right? So why do we create such complicated
measures to analyze these four attributes of effective teachers? How can we get to the core of effective teaching,
analyze, and evaluate its presence?
San Diego County Office of Education’s Teacher Effectiveness and Evaluation Project has been under development
for three and one-half years. When discussing the content with educational leaders, their first question is: “May
I see the tool?” Through listening, analyzing, and researching the information gathered during the journey of
defining and evaluating effective teaching, the conclusion is: It is not the tool, it is not the event of observation
and evaluation, it is the Process that makes the difference.
Charlotte Danielson's The Framework for Teaching, Robert Marzano's teacher evaluation scales, and the Gates
Foundation's MET (Measures of Effective Teaching) project support these findings.
Based on the information
gathered, the following Five
Step Process was developed.
Five Step Process
When approaching the task
of changing your teacher
evaluation system, several
conclusions might be drawn
as a result of conversations,
observations, and data
collection. Among these
conclusions is the fact that
all stakeholders must be
involved in change from the
onset and all must
understand and agree upon
the criteria to evaluate
teacher effectiveness.
The steps involved in this process begin with identifying the critical components of effective teaching to narrow
the focus. Once identification is accomplished, defining the criteria provides a common language for describing
classroom implementation. Next, calibrating ensures consistent recognition of the identified teaching practices
during observations. This is followed by collegial conversations, based on observation evidence. The final
component is applying these steps on a regular basis; thereby creating an evaluation system that is an ongoing
process and not an event.
Step One: Identify effective teaching practices
All stakeholders should have multiple meetings to discuss their viewpoints related to the question: “What is an
effective teacher?” This team must determine the critical components of effective teaching in relation to their
local context and based on standards. Some of the research-based standards to consider are:
• The California Standards for the Teaching Profession (CSTP)
• Essential Elements of Instruction (Madeline Hunter/Sue Wells/Welsh)
• Framework for Teaching (Charlotte Danielson)
• The Art and Science of Teaching (Marzano)
• Direct Instruction (Madeline Hunter)
• National Center for Urban School Transformation (Key focus areas)
• Gradual Release of Responsibility (Fisher/Frey)
Step Two: Define effective teaching
Once effective teaching criteria are identified, clear definitions of each criterion need to be determined.
There must be a shared understanding and a common language used to describe the effective practices. Once
established, this common language leads to consistency when communicating with stakeholders. When
collaborating to establish this common understanding of effective teaching, it is critical to include a variety of
stakeholders. This involvement of all stakeholders leads to more effective implementation and shared ownership
in the process.
Step Three: Calibrate to ensure consistency
Following the identification and definition of effective teaching practices, it is critical to begin the process of
calibration in order to establish consistency. Engaging in “instructional rounds” to conduct observations may
take place in teams across all stakeholder groups, teams from like subject areas/grade levels, school teams,
district teams, site administrators or a combination of all of the above. Prior to beginning, it is critical that all
observers have a clear understanding of the difference between evidence and opinion, as they collect observation
data, to ensure that the data is evidence-based.
The observation should be discussed until all observers agree on what they are seeing and are consistent in
identifying teaching practices with the established criteria and common terminology. Teams may also receive
additional practice through video observations.
Step Four: Conduct collegial conversations
Collegial conversations complete the process. Grade level, school site, cross-curricular and P-12 teams engaging
in system conversations should become the norm. These conversations may also be in the form of Professional
Learning Communities (PLCs). The discussions should focus on the observed effective teaching practices and
the impact on student achievement. Prior calibration related to the critical components of effective instruction
and their definitions leads to conversations that deepen understanding by using observation data and student
work to draw conclusions.
Through collegial discussions, ongoing calibration of consistent practices is maintained and inter-rater
reliability is achieved. This is an important component of the process.
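Calibration progress can be checked with simple agreement statistics. The sketch below computes exact percent agreement between two observers across a set of indicator ratings; the rating values and the four-point scale are illustrative assumptions.

```python
# Minimal inter-rater agreement check for calibration sessions: the
# share of indicators on which two observers assigned the same rating.

def percent_agreement(ratings_a, ratings_b):
    """Fraction of indicators rated identically by two observers."""
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("rating lists must be non-empty and equal length")
    matches = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b)
    return matches / len(ratings_a)

# Two observers rating the same lesson on six indicators (1-4 scale).
observer_1 = [3, 4, 2, 3, 3, 4]
observer_2 = [3, 4, 3, 3, 2, 4]
print(percent_agreement(observer_1, observer_2))  # 4 of 6 indicators match
```

More robust statistics (for example, agreement corrected for chance) exist, but even a raw agreement rate tracked over successive calibration sessions shows whether the team is converging on the established criteria.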
Step Five: Apply the steps to the evaluation process
Steps one through four of this process are then applied across the evaluation system in both formal and
informal settings. In order for evaluation to be transformed from an event into an ongoing process, an
environment of trust and a shared goal of effective teaching practices that impact student achievement must be
established. An evaluation process that establishes the goal of lifelong learning in a collegial setting, based on
data and feedback, establishes a process that is embraced rather than an event that has minimal impact on
practice. The following are the two settings in which these steps would be implemented. Both processes utilize
the five-step system.
12. Informal Evaluations
Informal walk-throughs verify the consistent implementation of effective teaching practices and the impact on
student learning. The more frequent the site administrator’s informal observations and feedback, the greater the
impact. In settings where these steps have been consistently implemented, site administrators are in each
classroom a minimum of once per week.
Other informal observations occur through the practice of teachers observing teachers as a common occurrence
that is then followed by collegial conversations. The data collected through these informal walk-throughs
provides information that is used to inform teaching practice at both the site and district levels.
Formal Evaluations
The formal evaluation process is generally district-specific; however, there are commonalities in formal
evaluations that include options, timelines, pre-observation conversations, observations and post-observation
conversations.
In most cases, site administrators have a pre-observation conversation with the teacher to discuss the lesson
design for the formal observation. The formal observation results in a collection of evidence. Observation
evidence may include time sweeps, student behaviors, teacher behaviors, direct quotes, or student work. This
evidence not only provides the evaluator with insight into a teacher’s classroom practice, it also allows the
teacher to become a more reflective practitioner. The observation is followed by the post-observation
conversations wherein the data and evidence are shared and a coaching conversation ensues.
The Teacher Effectiveness and Evaluation website has been created as a result of the development of this
structure. The structure of the website mirrors the Five Step Process. The intent is for teams to develop a
system based on group consensus for each step. If the entire staff is not included as each step is defined, we will
once again have an event-based evaluation system rather than an evaluation system that is an ongoing Process.
As an educator, Marsha Buckley-Boyle has been in the K-12 setting for over 38 years
teaching every grade in some capacity except 11th. Several of those years included being a
Reading Specialist in the Santee School District. Her teaching experience has also expanded to
include being an instructor at National University. Marsha’s education background
includes a BA from UCLA, an MA from Pepperdine University, Reading Specialist degrees from both the
State University of New York-Buffalo and SDSU, and an Administrative Credential from
Chapman University.
Her administrative experiences began as a Language Arts Specialist, followed by launching the
South County BTSA (Beginning Teacher Support and Assessment) Consortium, supporting
beginning teachers and their support providers from five districts, through California's induction
program. That directorship led to her becoming part of a state regional team of 10 leaders
supporting BTSA programs throughout California. Her current region is Orange County, supporting
15 programs in that area. San Diego County Office of Education’s Human Resource
Department is Marsha’s home base for the BTSA work as well as a special project, defining
effective teaching and evaluation. She says each experience builds on the knowledge and skills
she continues to acquire. These opportunities keep her motivated and encouraged to be an
innovator, as well as a life-long learner.
Effective leadership has a significant influence on
establishing and implementing successful educational
practices, including addressing the needs of students
with disabilities. Providing administrators who are
responsible for special education programs and
services with meaningful and relevant feedback is
essential to improving outcomes for students.
Evaluation should be an integral part of continuous
improvement, and the summative evaluation one part
of a process designed to improve leadership. A shift to
a more comprehensive approach to evaluation as part
of that process will emphasize the importance and
connectedness of student growth as an integral part of
what great leaders and educators do.
Reflecting an inclusive approach to a complex process,
the evaluation of special education administrators
should be more similar to than different from the
evaluation of other administrators. Educational
systems that utilize a framework connecting current
practice to student outcomes should also look at
effective ways to customize the aspects of the
evaluation process that provide examples of evidence
appropriate for the administrator's focus. An obvious
but often overlooked message is that administrators
are accountable for the growth of all learners,
including students with disabilities. Designing an
evaluation model that is applicable for all
administrators promotes a shared responsibility for
ensuring that students receive the wide range of
specialized interventions and accommodations needed
to be successful.
Professional practice frameworks, or rubrics, should
take into account the multitude of capacities in which
special education administrators serve a district,
including program administrators, coordinators,
consultants, and directors. Designing a system to
evaluate performance that will accommodate all
leadership roles should include consensus on
descriptions and explicit examples that capture
evidence of practice. It is also important to consider
what the roles and responsibilities are and which
aspects of the administrators' work can be measured
or analyzed. Since a predominant responsibility in
special education is ensuring that students' Individual
Education Plans (IEPs) are met, oversight and
leadership in the development and implementation of
data-based IEPs becomes a qualitative and
quantitative focus.
Evidence-based professional practices may include
specific roles and responsibilities such as the
facilitation of IEPs, coordination of related service
personnel, instructional leadership, supervision of
paraprofessionals, and professional development and
collaboration. Building consensus on descriptions and
emphasizing examples of evidence provides
opportunities for professional conversations on the
process, leading to continuous improvement. A focus
on the similarities among all school leaders, while
aligning examples of evidence to respective roles,
leads to a framework of professional practice that
provides effective formative evaluation and results in
evidence-based summative evaluations.
Student growth can reasonably be included as part of
the evaluation process and is an integral part of the
continuous improvement plan. Measures of student
growth are applicable for special education
administrators but require careful review and
application. Districts need to review the current
assessment measures used for special education
students, making certain that assessments are
accessible and that students with disabilities can
accurately demonstrate growth. Professional
development on how to select and/or develop
assessments that are accessible and measure student
growth may be needed. If districts are studying the use
of Student Learning Objectives (SLOs), they should
discuss how students with disabilities can be included
by setting target areas, developing example SLOs, and
providing guidance on how to differentiate learning
targets that take into account past learning trajectories
and present levels of performance.
The Response to Intervention (RtI) framework may
be an important component in implementing student
learning objectives and measuring growth. Data from
universal screenings provide baselines and identify
students needing more intensive instruction, resulting
in more frequent progress monitoring. Taking into
account past learning trajectories and students'
present levels of performance provides guidance in
how to differentiate learning targets. Administrator
and staff discussions on student measures for diverse
learners should connect the purpose with what is
being assessed and how growth is determined.
Illinois requires Principal Performance Evaluations to
include at least two assessments that measure student
growth. Assessments must include a Type I assessment
(one scored by a non-district entity and widely
administered beyond Illinois) or a Type II assessment
(one developed or adopted by the school district and
given by all teachers in a given grade or subject area
on a district-wide basis); a Type III assessment (one
that is rigorous, aligned with the course's curriculum,
and measures student learning) may be used when a
majority of students are not administered a Type I or
Type II assessment. Below is a current administrator's
student growth goal in literacy at a school for students
with multiple disabilities:
• Based on the school-developed Literacy Assessment
data gathered over the past two years, 80% of students
will demonstrate measured growth in independently
recognizing core curriculum vocabulary words using
the prompt hierarchy.
• Student growth calculation – students included in
student growth metric as long as the student has been
assigned to the school/program long enough to have
at least two data points on a comparable assessment.
• School/program goal aligned to Board Goal –
Student Achievement
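The inclusion rule above (a student counts toward the growth metric only with at least two data points on a comparable assessment) can be sketched as follows. The data structure, score values, and growth definition are illustrative assumptions, not the district's actual calculation.

```python
# Hypothetical check of the growth goal: among students with at least
# two data points on the literacy assessment, what share showed growth,
# and does that share meet the 80 percent target?

def growth_goal_met(scores_by_student, target=0.80, min_points=2):
    """scores_by_student maps student id -> chronological assessment scores."""
    eligible = {s: pts for s, pts in scores_by_student.items() if len(pts) >= min_points}
    if not eligible:
        return False, 0.0
    grew = sum(1 for pts in eligible.values() if pts[-1] > pts[0])
    share = grew / len(eligible)
    return share >= target, share

scores = {
    "s1": [12, 18],      # growth
    "s2": [20, 20, 25],  # growth
    "s3": [15],          # excluded: only one data point
    "s4": [22, 21],      # no growth
    "s5": [10, 16],      # growth
    "s6": [14, 19],      # growth
}
met, share = growth_goal_met(scores)
print(met, round(share, 2))  # 4 of 5 eligible students grew
```

Making the eligibility rule explicit in this way keeps the metric fair to students who transfer in mid-year, since they are simply excluded until a comparable baseline exists.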
The assessment measure has clear connection to the
curricular program and individual student goals in
literacy. Continued discussions on the
implementation of the literacy-based assessment and
analysis of student growth will focus on next steps and
further implementation of the assessment
district-wide.
Our educational organization is in the process of
completing our first year of implementing an
evaluation system based on leadership standards and
student growth measurements. We will continue to
have discussions on what constitutes evidence-based
practice for special education administrators. Next
steps include a review of our framework and
application based on an alignment of current job
descriptions and customized rubrics with an emphasis
on evidence. We believe an evaluation model that
includes a cycle of Roles, Goal Setting, Observations,
Self-assessment, and Summative Evaluations leads to
a Continuous Improvement System of Induction and
Mentoring, Professional Development, Supervision,
and Evaluation, equating to quality educators and
improved outcomes for students.
Cathy Kostecki is in her
eighth year as the Director
of Human Resources and
Instructional Services for the
Northwest Suburban Special
Education
Organization (NSSEO)
located in Mt. Prospect,
Illinois. For the past year,
Ms. Kostecki and Dr. Judy
Hackett have partnered with
the North Suburban Special
Education District to align
the Illinois Performance Standards for School Leaders to
the Principal/Administrator Professional Practice
rubric. The resulting Principal/Administrator
Evaluation Plan for NSSEO administrators establishes
clear indicators for professional practice and
incorporates student growth in each special education
administrator’s evaluation. Ms. Kostecki is currently the
Co-Chair of the Professional Development Committee
for the Illinois Association of School Personnel
Administrators and was Publicity Chair of the AASPA
2012 Chicago conference committee. During her career,
Ms. Kostecki has held the positions of teacher,
diagnostician, coordinator, assistant principal, and
university instructor.
Dr. Judy Hackett is in her
sixth year as Superintendent
of NSSEO, a special education
cooperative that services eight
elementary and high school
districts in the northwest
suburbs of Illinois. She serves
on several boards comprised of district
superintendents (IASA) and participates in
regional, state and national
groups that focus on
leadership, student data
systems, leadership professional development,
legislation and funding reforms. Her previous leadership
and teaching experience included fifteen years as an
assistant superintendent in the fourth largest district in
Illinois; she has also worked as a consultant, supervisor,
university instructor and teacher throughout her
extensive career in education. She presents at state and
national conferences on multi-tiered systems of support
and data, legislation, and currently on legal and systems
change aspects of evaluation.
written reflections. This includes “Materials which support the acquisition of new knowledge or skills
[including] videotapes, audiotapes, student products, teacher products, feedback from students, parents, peers
and administrators, etc.” (AASPA, p. 170) and…”lesson plans, anecdotal records, student projects, class
newsletters… annual evaluations, letter of recommendation, and the like” (Wolf, p. 35).
Since the teacher must reflect and decide what items merit being included in the portfolio, the process becomes
a structured self-assessment and opportunity for reflection and analysis (Stronge & Tucker, 2003; Danielson,
1996). The teacher and administrator or peer team of teachers then reviews the portfolio as part of the
evaluation. AASPA (2002) contends that portfolios should be linked to the goal process. It states, “The most
critical knowledge and skills identified in the teacher’s annual goals should be evaluated based upon the
corresponding evidence statements supported by the documentation contained in the portfolio” (p. 171).
Authors agree that it is not sufficient for administrators to diagnose data and inform teachers of the strengths
and weaknesses. Rather, formative teacher evaluation needs to provide a structure for individualized professional
growth through a process of self-assessment, goal setting, and feedback from such sources as peer review, peer
coaching, and portfolio development (Egelson & McColskey, 1998; Howard & McColskey, 2001; Ribas, 2005). A
coaching or mentoring session should accompany the sharing of feedback about the portfolio with the teacher.
The coach’s role is to help the teacher think deeply about the findings and then work together with the teacher to
create a goal or action plan based on the feedback to help the teacher develop professionally (Costa & Garmston,
1993).
Eportfolios: Trend for the Future
Traditionally, portfolios have been created in hard copy, often in three ring binders. More recently, electronic
portfolios (eportfolios) are being used because they afford certain advantages compared with their traditional
paper counterparts. In 2013, the National Board for Professional Teaching Standards adopted the use of
eportfolios in the evaluation of national board certification candidates (NBPTS, 2013, p. 3). There is relatively
little research on the use of electronic portfolios in the teacher evaluation process; however, a review of research
reveals ample data about the value of using electronic portfolios to assess K-12 student learning and university
pre-service teacher candidate growth.
The use of electronic portfolios for the professional development of teachers in the teacher evaluation process
holds potential. Advantages include the ability to save information, pictures, recordings, and videos on an
external website, a computer hard drive, or a flash drive. Saving in these ways minimizes the risk of having a
single copy of information lost or destroyed. Electronic portfolios allow for the documentation of multi-media
evidence that cannot be included in paper format, including videos, podcasts, wikis, etc. Electronic portfolios
can be shared by web links and passwords, allowing multiple viewers to see the portfolio simultaneously, view
the eportfolio electronically at any time, and have ready access to the eportfolio for discussion with the teacher.
Relatively few disadvantages exist. Viewers who are less familiar with this form of technology may benefit from
training.
Conclusion
Many authors concur that a main purpose of teacher evaluation is to promote the professional growth of
teachers (AASPA, 2002; Costa & Garmston, 1993; Danielson, 1996, 2001, 2007; Danielson & McGreal, 2000;
Egelson & McColskey, 1998; Howard & McColskey, 2001; NAESP, 2001; Ribas, 2005; Stronge & Tucker, 2003).
Utilizing multiple qualitative and quantitative data sources offers a more complete ‘picture’ of a teacher’s
performance than just observation or other single data sources. Portfolios, especially electronic portfolios, can
capture and preserve this evidence of teaching. Key to the portfolio construction is the inclusion of meaningful,
evidence-filled artifacts. Together with peer coaches or administrators, teachers can have meaningful
conversations about their portfolio evidence. Leaders trained in coaching techniques and formative
development conversations can guide the conversation in a way that provides ongoing professional development
for teachers throughout their careers. Eportfolios hold the potential to improve artifact options and preservation
and provide ready access facilitating teacher and leader conversations that promote teacher professional growth
with the ultimate goal of increased student achievement.
REFERENCES
- American Association of School Personnel Administrators. (2002). Teacher of the Future. Alexandria, VA:
American Association of School Personnel Administrators.
- Costa, A., & Garmston, R. (2002). Cognitive coaching: A foundation for Renaissance schools. Norwood, MA:
Christopher-Gordon Publishers.
- Danielson, C. (1996). Enhancing professional practice: A framework for teaching. Alexandria, VA: Association
for Supervision and Curriculum Development.
- Danielson, C. (2001). New trends in teacher evaluation. Educational Leadership 58 (5), 12-15.
- Danielson, C. (2007). Enhancing professional practice: A framework for teaching, 2nd ed. Alexandria, VA:
Association for Supervision and Curriculum Development.
- Danielson, C., & McGreal, T. L. (2000). Teacher evaluation to enhance professional practice. Alexandria, VA:
Association for Supervision and Curriculum Development.
- Egelson, P., & McColskey, W. (1998). Teacher evaluation: The road to excellence. Greensboro, NC: SERVE.
- Howard, B., & McColskey, W. (2001). Evaluating experienced teachers. Educational Leadership, 58 (5), 48-51.
- National Association of Elementary School Principals. (2001). Leading learning communities: Standards for
what principals should know and be able to do. Alexandria, VA: Author.
- National Commission on Excellence in Education. (1983). A Nation at Risk: The Imperative for Educational
Reform. Retrieved from http://www.ed.gov/pubs/NatAtRisk/index.html on January 10, 2008.
- National Board of Professional Teaching Standards (2013). Guide to electronic submission. Retrieved from
http://www.nbpts.org/sites/default/files/documents/ePortfolio2013/Guide_to_eSubmission_2013.pdf
- Ribas, W. (2005). Teacher evaluation that works! Westwood, MA: Ribas Publications.
- Stronge, J., & Tucker, P. (2003). Handbook on teacher evaluation: Assessing and improving performance.
Larchmont, NY: Eye on Education.
- Wolf, K. (1996). Developing an effective teaching portfolio. Educational Leadership 53 (6), 34-37.
Dr. Ann Gaudino is an assistant professor at West Liberty University, West
Virginia where she teaches graduate courses in education and education leadership. She
has also served as department chair, Coordinator of Clinical Practice, and Director of
Professional Development Schools. She is the founder and editor of The Excellence in
Education Journal (www.excellenceineducationjournal.org), an open access, refereed,
online journal that promotes and disseminates international scholarly writing about
excellent practices in all aspects of education. Prior to this appointment, Dr. Gaudino
served as school district Assistant Superintendent, principal, and teacher.
A native of Pittsburgh, Pennsylvania, Dr. Gaudino holds a Doctorate in
Education Administration and Policy Studies from The University of Pittsburgh,
Specialist degree in Education Administration from Wayne State University in Detroit,
and Master's and Bachelor's degrees from The University of Michigan in music education and organ performance.
Dr. Gaudino holds certification as superintendent, assistant superintendent, central office supervisory, principal,
elementary education, reading specialist, and music education in Michigan, Pennsylvania, and West Virginia.
Some see us as education’s odd couple—one, the
president of a democratic teachers’ union; the other,
a director at the world’s largest philanthropy. While
we don’t agree on everything, we firmly believe that
students have a right to effective instruction and that
teachers want to do their very best. We believe that one
of the most effective ways to strengthen both teaching
and learning is to put in place evaluation systems that
are not just a stamp of approval or disapproval but a
means of improvement. We also agree that in too many
places, teacher evaluation procedures are broken—
unconstructive, superficial, or otherwise inadequate.
And so, for the past four years, we have worked
together to help states and districts implement effective
teacher development and evaluation systems carefully
designed to improve teacher practice and, ultimately,
student learning.
While many factors outside school affect children’s
achievement, research shows that teaching matters
more than anything else schools can do. Effective
teaching is a complex alchemy—requiring command
of subject matter, knowledge of how different children
learn, and the ability to maintain order and spark
students’ interest. Evaluation procedures must address
this complexity: they should not only assess individual
teachers but also help them continuously improve.
Yet both of us have become increasingly concerned
that states and districts are doing evaluation quickly
instead of doing it right, which could have serious
adverse effects.
The Bill & Melinda Gates Foundation launched the
Measures of Effective Teaching (MET) study in 2009 to
identify effective teaching using multiple measures of
performance. The foundation also invested in a set of
partnership sites that are redesigning how they
evaluate and support teaching talent.
And the AFT has developed a continuous
improvement model for teacher development and
evaluation that is being adapted in scores of districts
to help recruit, prepare, support, and retain a strong
teaching force.
From our research, and the experiences of our state
and district partners, we’ve learned what works in
implementing high-quality teacher development and
evaluation systems:
1. Match high expectations with high levels of
support.
Teacher evaluations should be based upon
professional teaching standards that spell out what
teachers should know and be able to do. Teachers
should receive regular, timely feedback on their
performance and support to get better. The
responsibility for improving teaching shouldn’t rest
with teachers alone. Measures of effective teaching
enable school systems to better support teachers’
improvement needs and to determine if teachers have
the tools and school environment conducive to good
teaching. Sound measures help school systems know
where to target professional development and whether
those efforts work. The goal of the process should be to
systematically improve teacher practice and increase
student learning.
2. Include evidence of teaching and student learning
from multiple sources.
Measures of student learning gains commonly based
on end-of-year tests provide teachers with too little
information too late and may not reflect the full
breadth and depth of instruction. We know that a
balanced approach works best (teacher observation,
student work, and student assessments, for example)
and both our organizations are conducting what could
be called R&D in this area. The Gates Foundation’s
MET project (much but not all of which the AFT
agrees with) has found that combining a range of
measures—not placing inordinate weight on
standardized test scores—yields the greatest reliability
and predictive power of a teacher’s gains with other
students. And the AFT and its affiliates are exploring
ways to accurately determine what measures best serve
as a proxy for our work.
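The "combining a range of measures" idea can be sketched as a weighted composite. The weights and measure names below are purely illustrative assumptions, not the MET project's actual model:

```python
# Purely illustrative composite of multiple measures of teaching;
# the weights and measure names are hypothetical.
WEIGHTS = {"observation": 0.5, "student_survey": 0.25, "test_growth": 0.25}

def composite_score(measures):
    """measures maps measure name -> score on a common 0-100 scale."""
    return sum(weight * measures[name] for name, weight in WEIGHTS.items())

print(composite_score({"observation": 80, "student_survey": 70, "test_growth": 60}))  # 72.5
```

The design point matches the paragraph: no single measure (here, test growth at 0.25) carries inordinate weight, so one noisy input moves the composite less than it would alone.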
3. Use information to provide constructive
feedback to teachers, as befits a profession, not to
shame them.
The aim of evaluation should be to improve teacher
practice, not to sort or shame. Districts such as Los
Angeles and New York City have publicly released
teacher rankings. Both the AFT and the Gates
Foundation have criticized this practice. As Bill Gates,
the co-chair of the foundation, wrote in The New York
Times, “publicly ranking teachers by name will not
help them get better at their jobs or improve student
learning. On the contrary, it will make it a lot harder
to implement teacher evaluation systems that work.”
A more productive approach is to hold principals and
districts accountable for the continuous improvement
of teachers, including giving teachers key supports and
dismissing teachers who do not improve even after
receiving help.
4. Create confidence in the quality of teacher
development and evaluation systems and the
school’s ability to implement them reliably.
This means using a valid rubric for observing teacher
practice; training and certifying raters to ensure they
can observe classrooms fairly and consistently; and
observing teachers multiple times, using multiple
observers: administrators and peer or master teachers.
It also means preparing principals and others to give
skilled feedback that can support teachers’ growth.
5. Align teacher development and evaluation to the
Common Core State Standards.
MET data show that most teachers are a long way
from confidently handling the instructional shifts
necessary to meet the Common Core State Standards.
For example, while most teachers are adept at
classroom management skills, teachers have long been
taught to fit a lot of material in a short period of time,
not to ask high-level questions or to engage students
in rigorous discussions.
Luckily, this is also an area with huge, untapped
potential. For example, Teach Live, developed by the
University of Central Florida, enables teachers to
practice new techniques in simulated classroom
environments before trying them with real students.
Tutor.com provides teachers with individualized,
online coaching on how to teach concepts. And the
AFT, with Britain’s TES Connect, has developed “Share
My Lesson,” an online community for U.S. teachers to
collaborate and share teaching resources and
innovative ideas, with a significant emphasis on
resources to guide teachers in implementing the
Common Core.
Of course, school districts must also provide
continuous and relevant professional development and
growth for teachers that address their skills,
knowledge, and needs.
6. Adjust the system over time based on new
evidence, innovations, and feedback.
It’s essential that states and school systems measure the
extent to which new teacher development and
evaluation systems are being implemented with
fidelity, meeting their original purposes without
creating unintended negative consequences. We fully
anticipate the need to continuously update measures
of effective teaching and the best ways to use them, as
more research and experience become available.
Teacher development and evaluation must be a
vehicle to achieve the mission of public schooling.
And that mission must evolve from an outmoded
model of education that exists in too many places to
a new paradigm that will prepare students for life,
college, and career. Teachers must have a system of
professional growth that reflects the sophistication
and importance of their work, and they must have a
meaningful voice in that system. Just as we have high
expectations for teachers, we must have them for leaders.
Officials must invest in these systems—it is more
important to do it right than to do it cheap. And, lest
anyone expect that teachers, single-handedly, can save
public education, we must also focus on the
accountability and responsibility that rest with school
and government leaders to ensure that students and
teachers have the opportunities and supports they
need to succeed.
This article first appeared in New Republic on March
25, 2013. Used with permission from the authors.
In the early 1700s, teachers were not considered
“professionals” but were usually chosen by local clergy
or the local government. Evaluation of their work was
based on their morals, character, or whatever the local
authority felt was important.

In the 1800s, the movement was toward common
schooling and less simplistic schooling systems.
Teaching required more expertise and therefore more
supervision. The industrial revolution brought along
with it wage and labor issues. World War I and the
development of Aptitude and Intelligence Quotient
tests became a way of categorizing and placing people
in the “appropriate” profession track or ability group.
All of these events have affected where we are in
education and, more specifically, with teacher
evaluation.

Fast forward to the present day. Now, teachers
evaluated in Missouri with the Model Teacher
Standards and Indicators experience higher stakes
than ever. Higher stakes, performance of students,
and the social and political climates have created
more questions about what the expectations should
be for teachers. By what standards should teachers’
teaching be evaluated? What is fair yet sets the stage
for improvement? What is expected to happen with
the teacher evaluation? How can we ensure that the
evaluation process for teachers is conducted with
fidelity from teacher to teacher, building to building,
district to district?

By what standards should teachers be evaluated?

The Missouri Department of Elementary and
Secondary Education has developed teacher and leader
standards. The work on these standards began in 2007,
and the standards were adopted in 2011 by the State
Board of Education. During this time, No Child Left
Behind (NCLB) expired, and the focus was on
obtaining a waiver from the federal government’s
NCLB requirement that one hundred percent of
students perform at the “Proficient” level. The
standards in Missouri were developed by a group of
stakeholders, which included teachers, principals,
regional professional development representatives,
and college/university representatives.

The research and development that went into the
standards was extensive, and the product is being
piloted and researched by districts statewide. While
districts are not being required to adopt the document
that was developed by DESE, there are
non-negotiables which must be included in each
school district’s teacher evaluation instrument by the
2014-2015 school year. The new system developed was
based on seven research-based essential principles:

1. Research-based practices
2. Differentiated levels of performance
3. Probationary period for new educators
4. Use of measures of student growth in learning
5. Ongoing, deliberate, meaningful and timely
feedback
6. Standardized and periodic training for evaluators
7. Evaluation results to inform personnel employment
determinations, decisions, and policy

Senate Bill 291 directed school districts to adopt
teaching standards, which were to include the
following elements:

• Students actively engaged in learning process
• Various forms of assessment
• Teacher is prepared and knowledgeable of content
• Uses professional communication and interaction in
school community
• Keeps current on instructional knowledge
• Responsible professional in overall mission of school

It is important that teachers have a full understanding
of the expectations and that the expectations are stated
clearly and understandably. Expectations need to be
general enough for teachers to have autonomy in the
execution but specific enough that there is no doubt
about what should be done and what the outcome
should be.

What is fair yet sets the stage for improvement?

In the past, teacher evaluations did not necessarily take
into consideration the areas in which a teacher should
improve. The evaluation was more of a formality that
was required by the Personnel Department and by
state law, but was not necessarily connected to student
achievement or improvement in teaching. Almost all
teachers were evaluated with a mark of Satisfactory.
There seemed to be a lack of connection on the part of
the principal and maybe the district between
improvement of instruction and student achievement.
Principals spent more time on the management side
of the school rather than being the instructional
leader. Sometimes principals themselves were not well
prepared to provide effective feedback. Lack of training
for principals in appropriate observation and
evaluation practices creates a lack of fidelity and
consistency teacher-to-teacher,
building-to-building, and district-to-district.
Few teachers were rated at the levels of Needs
Improvement or Unsatisfactory, and when those
ratings were given, it was a way for the principal to
remove the teacher from the classroom rather than to
provide information about the teacher’s performance
and what professional development might be needed
to improve instruction. The format
that has been developed by Missouri’s Department
of Elementary and Secondary Education has a strong
component for teacher improvement. In the Missouri
DESE model, teachers and administrators collaborate
to choose an area in which the teacher needs or wants
to improve. Supporting and professional development
activities are developed for each teacher’s specific
needs based on data, and all goals are focused on the
school’s and district’s goals for student achievement.
What is expected to happen with the teacher
evaluation?
States and districts that are seeking to be a part of the
Waiver from the federal government are developing
standards by which teachers should be evaluated. The
expectation is for the new evaluation system to
increase the capacity of teachers. Building the teacher’s
ability to determine the academic needs of the students
and to make research and data informed decisions to
approach the needs of the students will result in higher
student achievement.
In the state of Missouri, the new system is going
through the pilot stage with several districts
throughout the state. Feedback will be given to the
state, which may result in changes made in the current
state teacher evaluation model. Research tells us that
a teacher evaluation system should provide the teacher
with the opportunity to reflect on his/her professional
performance, with student achievement as one
consideration. Teachers should be
able to make professional decisions about their own
performance and collaborate with the principal to
create a professional development plan that is specific
to the areas in which improvement should be made.
How can we ensure that the evaluation process for
teachers is conducted with fidelity?
The building administrator should collect data on a
routine basis both formally and informally through
walk-throughs, conversations, and a variety of student
achievement data. Evaluations should be supportive
and should foster an attitude of improving instruction
and, therefore, student achievement. Teachers should be
evaluated on not only the delivery of curriculum, but
also on their classroom management, the climate of
their classroom, and the climate they contribute to the
building and district as a whole. Evaluation on student
engagement, as well as on student achievement should
also be factors.
Administrators should be given extensive professional
development on the evaluation instrument that is
used. Included in the professional development should
be examples of each criterion and a video clip of a
teacher teaching, which allows the administrator to
rate the teacher according to the criteria. There then should
be comparisons made on the administrators’ rankings
and a discussion about why they ranked the teacher as
they did. This helps administrators understand the
process and gives them feedback on how other
administrators view a teacher’s performance. This
training must occur on a regular basis, at least annually. This
would promote the importance of informed
evaluations for teachers as well as fidelity to the
evaluation process teacher-to-teacher,
building-to-building and district-to-district.
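One simple way to check the norming exercise described above is to compare how often two administrators gave the same rating on each criterion. The ratings and labels below are hypothetical; a formal program might instead use an established inter-rater reliability statistic:

```python
# Hypothetical check of rater consistency after a norming session:
# the fraction of criteria on which two administrators agreed.
def percent_agreement(ratings_a, ratings_b):
    """Each argument is a list of ratings, one per criterion, in the same order."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

admin1 = ["Proficient", "Basic", "Proficient", "Distinguished"]
admin2 = ["Proficient", "Proficient", "Proficient", "Distinguished"]
print(percent_agreement(admin1, admin2))  # 0.75 -- they disagreed on one criterion
```

Low agreement after training would signal exactly the fidelity problem the article describes: the same lesson being rated differently depending on who observes it.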
Conclusion
The only way to make sure the evaluation process is
fair and promotes instructional development is to
make sure the standards are clear for the teachers and
the evaluator is well versed in the standards and
expectations and is provided regular professional
development on teacher evaluation.
Standardization and norming of the process creates
fidelity in the evaluation. As with any evaluation
system, if it isn’t used appropriately, it is at the least
ineffective; it creates inconsistency and a lack of equity
and does not improve student achievement.
Classroom Management. The Ability to:
• Get and keep students on task
• Teach expectations
• Maintain high-positive to negative interaction
• Respond non-coercively to inappropriate behavior
• Avoid being trapped in one of the seven traps that many educators fall into.
Teaching Strategies. The Ability to:
• Implement lesson plans
Being Prepared/Professional. The Ability to:
• Arrive on time and supervise students
Special Education. The Ability to:
• Adapt lessons, provide proper care, and work with paraprofessionals
For example, the shorter the time between the beginning of class and when the students are involved in a
productive activity, the better (skill #1 in classroom management). Evaluating this specific skill gives a clear
understanding of how well the substitute teacher is doing.
In observing school districts, evaluation methods fall into one of the following types:
• Observation
• The Day-After Feedback
• Self Evaluation
• Self Efficacy
• Survey
The Observation method is used by an administrator entering a classroom and rating the teacher’s performance
from excellent to unsatisfactory in areas such as demonstrating punctuality, being neat and professional in
appearance and demeanor, following instructions, and demonstrating clarity in verbal presentations.
The Day-After Feedback method is where the permanent teacher answers questions based on what they find
when they return: whether the lesson plans that were left were implemented, whether there were any behavioral
issues, whether the room was left a mess, and/or what feedback students give about the substitute.
The above evaluations tend not to improve the performance of the substitute teacher but rather document what
went on while the permanent teacher was out of the classroom.
Self Evaluation can provide a more active role in improving one’s performance. For example, the individual is
asked to take data on their own actions and decide if it is a positive or negative interaction with students. The
substitute teacher would write down an interaction such as “Responded to Melissa when she spoke out of turn”
and indicate if it was positive or negative. The school district is only interested in whether the substitute teacher
completes the form, not necessarily in how well the performance is graded.
Self Efficacy is another self-evaluation tool, but it looks at the individual’s belief in their own ability to manage
a situation. For example, questions are rated from having no influence on student behavior to a great deal of
influence with regard to how much the teacher can do to get students on task after an assembly, or how much
they can do to communicate expectations to their students.
It is believed that when one’s belief in their own ability increases, student achievement increases as the teacher
tends to teach and manage better. Self efficacy is a powerful tool to monitor how well all the substitute teachers
are doing.
Geoffrey Smith founded the Substitute Teaching Institute at Utah State University in
1995, directing the institute until it spun off from the university in 2008. Mr. Smith is
currently the director of the Substitute Teaching Division of STEDI.org, which is under
license agreement with the university to continue its mission to “Revolutionize the role of
substitute teaching into an opportunity for educational excellence.”
Mr. Smith is the executive producer of the online training courses offered by STEDI.org,
including SubSkills Basic Online Training and SubWise eMentoring. Mr. Smith received
both a Master’s Degree in Educational Economics and a Public Sector MBA degree from
Utah State University.
Surveys, like the self-efficacy method, give a school district an overall view of all substitute teachers’ abilities and
attitudes. For example, survey results show that individuals who reported participating in a refresher training
course rated their ability in the area of classroom management significantly higher than those who did not
participate in this form of training.
In addition, some surveys have shown that the longer an individual has been a substitute teacher, the LESS they
agree with statements such as these:
• I feel the district places a high priority on substitute teachers,
• I feel welcomed and appreciated while substituting at most schools,
• I feel that I have access to adequate resources to complete my educational tasks,
• I feel safe at school sites, and
• I feel that school personnel support me throughout the day.
When you begin evaluating substitute teachers, if your goal is to improve performance and that goal is well known,
you will have a greater chance of getting substitute teachers to participate and to improve their performance.
When they improve their performance, everyone wins!
If you would like samples of school districts’ evaluation forms, please email me at Geoffrey.Smith@STEDI.org and
put “Substitute Teacher Evaluations” in the subject line.
Pressures at Work
Principals often don’t “say it like it is” in teacher evaluations. Even with all the recent attention to enhanced
evaluation forms, protocols, and online tools, a recent survey of several states showed that more than 95
percent of teacher evaluations still rank the teacher as effective or highly effective…an unrealistic number
in any profession. [NY Times; March 30, 2013]
A culture of “shadiness” frustrates other attempts at principal accountability, such as matching high-stakes test
results to teacher evaluation marks. In Atlanta, the pressure related to standardized testing contributed to the
indictment of 35 educators for secretly changing students’ incorrect answers on tests. A third-grade teacher who
helped the prosecutors spoke to the pressure, stating: “The cheating had been going on so long, we considered it
part of our jobs.” [TIME; April 15, 2013]
Further, principals may feel that the repercussions of honest evaluations are simply too great and the immediate
benefits too small. Strained working relationships, excessive time spent pursuing a plan of improvement, and no
immediate upgrade of the questionable performance, all make telling the truth feel like a bad idea. Many say it is
easier to pick your battles, give satisfactory ratings to all, and work off the record with those teachers with whom
you think you can make a difference.
Forms are Not the Answer
The challenge in having people be honest with one another in the evaluation process resides in the human
factor, not in forms and procedures. Many appraisal instruments are standardized and have check boxes to rate
the various performance measures, presumably making the process much more objective. However, the
individuals delivering the feedback have vastly different communication skills and are constrained by other
pressing responsibilities. Now the feedback may feel very subjective. If that weren’t daunting enough, evaluators
also have to confront their own very real fear of making mistakes and their fear of others’ reactions.
Many other current conditions create dread on both sides of the evaluation process. Scarcity of resources,
top-down demands from state education departments, the weight of societal needs on schools, too many
expectations and too little time, all take a daily toll. In the midst of this maelstrom of demands, the site
administrator is the one responsible for holding teachers accountable for student performance. It is not too hard
to see the “why” of evaluations that are less than truthful.
The Forward Path of Continuous Communication
Principals’ major operational tool is their working relationship with their staff. Given people’s often-intense
reaction to any feedback that is less than stellar, it is no surprise that principals are reluctant to rock the boat for
fear of hampering those relationships. Further, site administrators often have little training in maintaining the
necessary level of relatedness with people while providing constructive feedback on the performance of teachers,
some of whom are already so stressed about their work that they are willing to cheat to get by.
Site administrators will brave the possible consequences of honest feedback only if by doing so they can make a
significant difference to the education of the students and the operation of the entire school. They also need to
know they are being fair. This type of environment is possible if the accountability system recognizes the reality
of the human factor.