Ikin & McClenaghan page 1/9
Modelling the influences of evaluation
on school principals:
Towards evaluation capacity building
School reviews form part of many school systems’ evaluation
frameworks, most often with the dual purpose of accountability
and improvement. With some modifications, such reviews also
have potential as evaluation-capacity-building activities. This
paper demonstrates how to build evaluation capacity by
combining formative and summative evaluation in school
reviews that deliver both accountability and school
improvement. Using a traditional school-review structure as a
starting point, a new school-review strategy was developed that
engaged school principals as evaluators in ways that boosted
the evaluation capacity in their institutions. This capacity
comprised new skills, knowledge, and understandings that
became embedded in the culture and practices of their schools.
This paper breaks new ground by establishing the limitations of
a composite model of evaluation influence derived from current
literature. Further, it uses empirical data from field trials to
propose a new model that emphasises the role of catalytic
values and double-loop learning in situations where leaders
become transformational and formative evaluators in
empowered communities of practice.
Introduction
Structures for the evaluation of school performance and the balance in
these structures between accountability and improvement have been
the subject of much recent research and debate. For government
school systems in particular, the challenge has been to strike the right
balance between public-accountability and developmental processes
that influence school principals to transform their schools and build
evaluation capacity. One method of evaluation regularly featured in
system-level evaluation structures is that of a school review. A
participatory-action-research (PAR) project was undertaken in one
government school region in Sydney, New South Wales, to develop
and implement a whole-school review process that used school
principals in pivotal participatory evaluation roles.
The purpose of this paper is to report a study (conducted
concurrently with the PAR project) that examined the influence
experienced by the participating school principals. The paper presents
qualitative research using a case-study methodology. It is based on
data collected from multiple sources that tested the concept of
evaluation influence in relation to whole-school participatory
evaluations in government schools. The paper presents a provisional
theoretical model that was specifically designed to collect and map
Kerrie Ikin
Kerrie Ikin is an independent
evaluation consultant and has just
successfully completed a PhD
focusing on evaluation influence and
evaluation capacity building in the
UNE Business School at the
University of New England, Armidale,
NSW, Australia.
Email: kerrie.ikin@gmail.com
Peter McClenaghan
Dr Peter McClenaghan is a Lecturer
in Organisational Behaviour and
Executive Leadership in the UNE
Business School, specialising in
leadership development and
organisational behaviour. He teaches
the Executive Leadership module in
the UNE MBA.
Email: pmcclena@une.edu.au
Ikin, K & McClenaghan, P (2015), 'Modelling the influences of evaluation on school principals:
towards evaluation capacity building', Evaluation Journal of Australasia, vol. 15, no. 1, pp. 19–27.
influence data as they occurred. It shows how an initial interest in
developing a model to untangle strings of influence led to new
knowledge about factors triggering influence, the types of evaluation
influences that occurred and when they occurred, and the processes
whereby evaluation itself was improved and evaluation capacity built.
In doing so, it emphasises the role of personal values as catalytic
conditions. The paper further shows how the findings about double-
loop learning cohere with notions of transformational leadership and
communities of practice as a way of explaining a more holistic
conceptualisation of evaluation influence. Finally, the paper proposes
a model that explains the broader causal story of evaluation influence
in a school-review setting.
The Study in Context
From the early 2000s in NSW, system-supported
school reviews occurred by exception, in cases
where various data indicated underperformance
of a program or dysfunction in the school’s
management structures. There was no systemic
mechanism for all schools to participate in, or be
subject to, regular reviews of their whole-school
operation or to develop their own evaluation
capacity. Such a process was foreshadowed in
2003–04, when a component called ‘cyclic reviews’
appeared in a new framework for school
accountability, but no definition or specification of
what a cyclic review entailed was given, and no
further work on these cyclic reviews at a system-
wide level was pursued at that time.
One Regional Director (one of ten within the
education department in NSW at the time) took
the initiative in 2006 to develop the cyclic review
component of the framework for his region’s
schools and listed the development of what he
termed a regional ‘Cyclical Review’ process for
schools that would ‘strengthen school
development support and community confidence’
as a priority in the regional strategic plan. As a
result the Cyclical Review process, whereby
principals would be key participants in their own
schools’ evaluations, and in the evaluations of
other principals’ schools, was developed and
trialled over a four-year period. Figure 1 clarifies
the composition of review teams.
In order to examine the involvement of principals
in the evaluation processes, the concept of
evaluation influence was seen as particularly
relevant. Given that school improvement was seen
as a major reason for the introduction of Cyclical
Reviews, it was important to examine the use the
principals made of the evaluation findings and
whether they were influenced by those findings. A
further matter, of equal importance, was the
influence that involvement in the review process
had on the principals’ skills in, understandings of,
and attitudes towards evaluation. This influence
was interpreted as being not only that experienced
by the principals as individuals, but also the
influence they experienced as school and system
leaders. Finally, if the Cyclical Reviews could be
seen as enhancing the skills in, understandings of,
attitudes to, and application of evaluation
processes and procedures by those involved, then
that enhancement could be defined as evaluation
capacity building (ECB).
The aim of the study was to gain impressions
from the principals about the factors, if any, that
had influenced them to use the evaluation
processes and results, both in the course of the
review and for at least twelve months afterwards.
The practical intention was to use this knowledge
to modify and strengthen the Cyclical Review
program in the region and, in so doing, contribute
to the implementation of that element of the
statewide accountability framework. Finally, it
was expected to add to current knowledge
concerning Cyclical Reviews both as theory and
policy.
Three research questions were posed:
1. What factors, prior experiences, and
understandings contributed to the influence
that the involvement in Cyclical Reviews had
on the participating principals?
2. How did participation in Cyclical Reviews
influence participating principals?
3. To what extent were the outcomes of ECB
demonstrated by the principals who
participated in the Cyclical Reviews?
The study compared two groups of principals.
FIGURE 1: COMPOSITION OF REVIEW TEAMS
To develop a sense of ownership of the process and build the evaluation capacity of many more school and regional
staff and, in particular, the principals, each review team comprised:
• A principal as the team leader.
• A principal from another school in the region not closely connected to the host school.
• School staff, including executive staff, from within and beyond the school. The number of team members depended on the size of the school being reviewed.
• A regional consultant, selected to match the perceived development needs of the school.
• Other personnel—for example, parents, personnel from special interest groups—could be members of the review team, depending on the needs of the school.
• A school development officer (a senior officer experienced and trained in conducting school reviews) as an ex-officio team member, acting as coach to the team leader and assisting in the review as required.
• In addition, the host-school principal collaborated with the team-leader principal in the organisation of the review and in providing additional information or clarification throughout the review as required.
Group 1 (n = 9) was involved in the design of
Cyclical Review criteria and processes, hosting
reviews, and leading reviews. Group 2 (n = 9)
served only as team members on other principals’
reviews.
The approach required the development of a
model to classify the data being used to address
the research questions and interpret the findings
in terms of the relevant literature.
Provisional Theoretical Model of
Evaluation Influence
The model of evaluation influence (Figure 2)
was developed from the range of relevant
literature including, in particular, the models
proposed by Kirkhart (2000) and Mark and Henry
(2004). This tentative model classified the factors
that could trigger influence as context, human, and
evaluation, in line with Alkin’s (1985) categories. It
also conceptualised influence on four dimensions:
source (process and results); intention (intended
and unintended); level of analysis (principals as
individuals, as school leaders, and as system
leaders); and time (immediate, end-of-cycle, and
long term). Together these elements initially
appeared to explain the degree of influence in two
areas: the application of review findings and ECB.
While the theory and model derived from it
proved effective for data-gathering purposes, a
number of limitations were found with regard to
the analysis of patterns of influence, one of the key
factors identified as a core element from the
literature. First, the model as depicted did not
adequately represent how learning about
evaluation influences differed between the two
groups of principals involved in the study.
Second, it did not represent the catalytic
conditions that appeared to make a real difference
to the degree of both results and process-based
influence. While numerous factors and
subgroupings of factors were present, a set of
catalytic conditions per se was not identified in the
literature. Third, it did not recognise the role that
values and assumptions were found to play nor
the dynamics of structuration, whereby evaluation
itself was improved and evaluation capacity was
built. Fourth, while the provisional model enabled
an episodic mapping and analysis of the
microdynamics of influence, a more macro model
was required to map and explain the broader
causal story of influence found in the data about
Cyclical Reviews.
FIGURE 2: PROVISIONAL THEORETICAL MODEL FOR THE ANALYSIS OF EVALUATION INFLUENCE (ADAPTED FROM
KIRKHART 2000, P. 8, AND MARK AND HENRY 2004, P. 41)
The basic schism in the findings was the
difference between the influence experienced by
Group 1 and Group 2 principals.
Group 2 principals, who advised as
team members only in other principals’ schools,
were primarily concerned with problem-solving
when outcomes fell short of objectives. They
modified their evaluations and strategies and
techniques in order to make relatively ad hoc
improvements to their own evaluation practices
and capacity building. This was consistent with
the limits of single-loop learning as theorised by
Argyris and Schön (1978) and Senge et al. (2010).
In sharp contrast, Group 1 principals were
strongly influenced by having to understand,
apply, and improve criteria used in the evaluation
of their own schools, design evaluation strategies
and techniques, and later reflect critically on the
quality of evaluation practices and capacities in
order to improve the values and assumptions
influencing their own practices. These activities
illustrated the components of double-loop
learning. The basic structure of these two patterns
of influence is portrayed by the light and medium-
shaded panels in Figure 3.
Research Question 1: What factors,
prior experiences, and understandings
contributed to influence on the
participating principals?
The factors, prior experiences, and understandings
that contributed to influence on participating
principals were grouped into three categories—
context, human, and evaluation factors—in line
with existing theories from the literature. The
context factors found to trigger influence included
school culture, training, duration and timing, team
size and composition, and review focus. The
human factors found to trigger influence included
motivations, and the interaction between the
leadership of the review teams, the leadership by
host principals, and human relationships. The
evaluation factors found to trigger influence
included structures and resources, and the data-
collection processes and methods.
FIGURE 3: PROPOSED MODEL OF EVALUATION INFLUENCE
[The figure shows a single-loop-learning path: comparing outcomes with objectives to advise further improvements (how we solve problems) leads to modified evaluation strategies and techniques (what we do), to ad hoc improvement in evidence-based and evaluation practices, and to some ECB (what we get). A double-loop-learning path adds critical reflection on embedded assumptions and catalytic values about factors triggering influence (how we think about evaluation) and operational assumptions (why we evaluate the way we do), leading to evidence-based practice, improved evaluation practices, and embedded ECB (what we get). Context and human factors feed both loops through clouds of catalytic values: openness, readiness, clarity, and transparency (context); openness, trust, credibility, competence, knowledge, commitment, and ownership (human); and openness, clarity, consistency, standards, consensus, engagement, transparency, and functionality (evaluation).]
From the analysis and synthesis of findings it
was found that the evaluation methodology used
in Cyclical Reviews was primarily independent as
a process but differentially affected by context and
human factors in each school setting. The major
variances to standardised review processes were
the leadership services provided to the review
team, the leadership by the host principal, and the
partnership between the review leader and the coach. The
conclusion drawn from this finding was that the
Cyclical Review evaluation process varied
substantially between schools owing to school
contextual factors and, in particular, to the
interaction of the human factors associated with
leadership. While this conclusion is itself an
interesting aspect of the relationship between the
three factors that trigger influence, it in turn
leads to two further conclusions.
First, it can be theorised that the evaluation
factors incrementally influence the modification of
evaluation strategies and techniques used in each
setting, and can therefore be considered to be
intrinsic to the activities represented by the panel
labelled Modified Evaluation Strategies and
Techniques (What we do) presented in Figure 3.
Second, this means that context and human factors
interact with evaluation factors, and do so over
time as participants make improvements to their
evaluation practice. Their interactive relationship
is indicated in Figure 3 by the double-headed
arrows. This is consistent with prior theory about
single-loop and double-loop learning as proposed
by Argyris and Schön (1978). The key aspects of
single-loop and double-loop learning theory are
that through an entity’s double-loop-learning
attempts to reach a goal, the entity is able to
modify the goal in the light of experience or even
reject the goal. Through an entity’s single-loop-
learning attempts, however, the entity makes
repeated attempts at the same problem, with no
variation of method and without ever questioning
the goal. It is interesting that more recent
practitioners and theorists, such as Degenhardt
and Duignan (2010), Hargreaves (2013), and Stoll
(2013), take for granted and regularly apply the
ideas of double-loop learning.
Research Question 2: How did
participation in the reviews influence
participating principals?
The research findings led to the conclusion that
participation in Cyclical Reviews influenced
principals according to the role they played, in line
with participatory evaluation and its links with
ECB, much as proposed by Owen (2006, pp. 220–
221). The different degrees of deep learning, the
coherence of evidence collected and applied, and
the pace at which cultural change was embedded
were found to be related to the nature of the
principals’ participation in evaluation processes
and the subsequent acquisition of new personal
theories and practices.
These differences are shown explicitly in
Figure 3 as the differences between single-loop
and double-loop learning. One example consistent
with single-loop learning discussed in the
literature occurs when a participant connects a
strategy for action with a result and makes
corrections. If during single-loop learning an
action taken yields results that are different from
what was expected, the participant will observe
the results, automatically take in feedback, and try
a different approach. This trial-and-error approach
matches closely the learning activities that the
Group 2 principals tended to use.
In contrast, and consistent with double-loop
learning discussed in the literature, one example
of a participant’s action would be to go beyond
this basic approach and re-evaluate the deeper
governing variables that appear to explain why
participants behave in the ways they do. Re-
evaluating and reframing goals, values, and beliefs
are more complex ways of processing information,
involve more sophisticated ways of engaging with
an experience, and look at consequences from a
wider or more philosophical perspective. This
again matches closely the sorts of learning
activities and capacity-building strategies that the
Group 1 principals engaged in. This important
conclusion coheres with the exploratory works of
Scharmer (2009), and Scharmer and Kaufer (2013),
who have proposed ‘U learning’ (radically
reconceptualising and reconstructing
organisations) as critical in organisational
learning. Importantly, this form of learning closely
aligns with Wenger’s conceptualisation of learning
through communities of practice (Wenger,
McDermott et al. 2002, Wenger, White et al. 2009).
The essence of this form of organisational learning
is that it can be either formal or informal.
Thus, according to Wenger, learning can be
‘bootlegged’ through a circle of people ‘in the
know’ who are motivated to learn irrespective of
formal boundaries (Wenger, McDermott et al.
2002, p. 28).
Research Question 3: To what extent
were the outcomes of ECB
demonstrated by the participating
principals?
The outcomes of ECB were far more evident in
Group 1 principals over time than in Group 2
principals, largely as a consequence of the extent
to which they engaged in deep learning about
evaluation and other double-loop-learning
activities.
Both groups of principals demonstrated ECB
to varying extents. At the outset there were many
similarities in the nature of learning about
evaluation for principals in both groups: most
notably their skills and understandings in
evaluation were enhanced, and they began to
demonstrate an improved ability to use evaluation
data and processes. Over time, however, there was
a marked difference between the two groups in the
extent to which ECB outcomes were
demonstrated.
By the end-of-cycle phase the bifurcation in
process-based influences, and hence the extent to
which principals had developed and
demonstrated evaluation capacity, had become
particularly marked. These differences were
primarily due to the rigour of the Cyclical Review
processes and the learning about evaluation that
they stimulated. Group 1 principals, in general,
had developed to a far greater extent the skills to
plan, design, collect, and analyse data and to
interpret and report results. The long-term
influences showed that Group 1 principals
demonstrated the outcomes of ECB identified in
the literature; they had embedded Cyclical Review
evaluation culture and practices into their schools.
In Stufflebeam’s (2002) terms, they had
institutionalised reforms. In Giddens’s (1984)
terms, they had learnt to manage structuration. In
contrast, Group 2 principals were still grappling
with the need to develop a comprehensive model
of evaluation with adoption characterised by the
ad hoc or incoherent use of methods. This implies
that Group 1 principals, or other principals who
gain similar experience and skills, could become
coaches in future school-based cyclical-review
processes or take on other leadership roles in
similar contexts.
Put another way, Group 1 principals used
double-loop learning to design and consolidate
their learning about Cyclical Reviews, while
Group 2 principals participated in another school’s
review without having the challenges and
advantages of gaining empirical feedback about
the effectiveness of their own understandings of
school evaluation. As a result, the early and deep
learning about evaluation processes experienced
by the Group 1 principals led over time to their
demonstrating ECB outcomes to a considerable
extent. Further, Group 1 principals engaging in
deep learning created both fresh structures in their
schools and fresh outcomes, including those of
new professional leadership capacities in
evaluation, and thereby illustrated another aspect
of Giddens’s (1984) structuration theory about the
duality of structure: it is both process and
outcome.
The markedly different extent to which
participation by the two groups resulted in ECB is
also illustrated in Figure 3.
Implications for Theory
Findings from the research led to the conclusion
that significant advances in the tentative theory for
this study were warranted, specifically the
incorporation of factors triggering influence into a
broader model of how principals learn about
evaluation and ECB. In this study, context and
human factors interacted with evaluation factors
for both groups of principals. Two other generic
conditions identified in the literature review
were relevance and timeliness.
What did not appear in the literature was the
importance of key values acting as catalytic
conditions during Cyclical Reviews. The values
that relate to the three factors triggering influence
are presented in Table 1.
While many of these values were recognised
as characteristics or factors in the prior research,
the analysis of data from this study led to the
implication for leadership theory on Cyclical
Reviews that these values act as threshold
conditions for change.
TABLE 1: CATALYTIC VALUES RELATED TO FACTORS
TRIGGERING INFLUENCE
Human factors: openness, trust, credibility, competence, knowledge, commitment, ownership
Context factors: openness, readiness, clarity, transparency
Evaluation factors: openness, clarity, consistency, standards, consensus, engagement, transparency, functionality
While a full treatment of the role of catalytic
values was beyond the scope of this study, these
findings suggest that the nature, distribution, and
use of power in the context may mediate influence.
A recent review of political ideologies in
educational leadership (Macpherson 2014) showed
that a tradition of pragmatism has been gradually
challenged by communitarianism, communicative
rationalism, and egalitarian liberalism. The
catalytic values in Table 1 are strongly indicative
of influential leadership, with some indications of
pragmatism, in a ‘school as a community’, given
the valuing of competence, standards, and
functionality. This implies that further research
could use political philosophy to identify and
evaluate how appropriately power is used to gain
influence during school-based reviews that are
cyclical. As discussed previously, the notion of
learning outcomes from cyclical reviews should
also consider the wider learning opportunities and
models that operate beyond the scope of the
cyclical framework, as evidenced by communities
of practice learning (Wenger, McDermott et al.
2002). Borzillo and colleagues (Borzillo, Schmitt et
al. 2012) claim this creates learning agility through
community interactivity, adding weight to
Macpherson's proposition of communitarianism.
The next theoretical issue concerned the
interplay between the three categories of factors.
The findings showed that the evaluation
methodology used in Cyclical Reviews was
primarily independent as a process but was
differentially affected by context and human
factors in each school setting.
Further to the above, a third implication was
that a major source of variance to standardised
review processes was likely to be the leadership
services provided to the review team, the
leadership by the host principal, and the
partnership between the review leader and the
coach. According to Dalton (2011), integrating
coaching into leadership development and
learning is taking centre stage in new forms of
organisational learning. A deeper understanding
of coaching as a learning model within the reviews
would deepen learning opportunities and feed
into double-loop-learning and triple-loop-learning
frameworks.
From the review of the literature, Cyclical
Reviews were found to be a form of participatory
evaluation with all of the features of PAR. ECB
was found to be an organisational learning process
intended to make quality evaluation for continuous
improvement a routine practice. As a
consequence of uncovering the very different
experiences of Group 1 and Group 2 principals
and the extent to which these experiences
impacted on evaluation practices and capacity
building, the fourth implication was that school-
based reviews that are cyclical should be theorised
as PAR and organisational double-loop learning.
The above have considerable strategic
implications for school leaders in their
professional development and their use,
development, and implementation of evaluation
policy. A key outcome of getting better value from
the evaluations is to enhance the leadership
capability and performance of the participating
principals, whatever their role. Therefore a further
implication for theory is that leaders of school-
based reviews that are cyclical will need
professional development to learn the theoretical
and practical implications of double-loop learning.
The full model presented in Figure 3 shows all
the elements of evaluation influence derived from
the study: single-loop and double-loop learning;
factors triggering influence—human and context—
with evaluation factors being subsumed in the
double-loop-learning and single-loop-learning
process Modified Evaluation Strategies and
Techniques (What we do); and the catalytic values,
shown in clouds.
Implications for Policy
While the study was embedded in a specific
system’s context, the findings suggest a cautious
generic implication for systems of schools.
The implication was the need for this type of
school review to retain all the features of PAR,
which implied the need to organise reviews
around cohorts of principals and for each future
cohort to engage in experiential learning by
designing reviews, participating in them, and then
reflecting critically on the quality of evaluation
and the degree of capacity building achieved. This
need also means that offering principals external
validation services, even using other principals,
does not necessarily provide them with the
double-loop-learning opportunities required to
guarantee evidence-based evaluation practice and
ECB. One option here, however, is to frame the
reviews within more formalised communities of
practice, as suggested by Wenger (Wenger,
McDermott et al. 2002), the idea being to open up
learning pathways that may have been suppressed
in more traditional, structured learning models.
Implications for Practice
From the analysis and synthesis of findings and in
line with the research literature, the quality and
relevance of evaluation practice, in all of its forms,
was found to exert influence in three broad ways.
First was the extent to which Cyclical Reviews
were efficiently and effectively conducted. Second
was the extent to which skills, understandings,
and evaluation capacity of the principals
participating in the reviews were developed. Third
was the extent to which review findings and
processes in the schools of the participating
principals were implemented.
Practice in relation to planning processes was
also found to be effective and efficient in that
reviews were conducted on time, were completed
following the planned procedures, and used the
information-technology resources secured for the
purpose. The review teams applied themselves
with energy and conviction, and the schools under
review appeared to have appreciated the process
and the resulting recommendations for school and
staff development. Refinements to a number of the
processes were made by the principals as they
developed evaluation skills, knowledge, and
understandings. Further, by the time the study
was concluding, advances in technologies had led
the researcher and a number of participants to
consider adopting newer, more efficient, and more
effective data-gathering and sorting methods. This
is, again, consistent with double-loop learning.
A further area of practice was that of
professional learning and accreditation. It was
found that the principals not only acquired new
evaluation skills and knowledge but recognised
and valued these qualities in others and believed
that the reviews were more effective when all
involved—coaches, host principals, team leaders,
and team members—were trained specifically for
their roles.
A final area of practice concerned the
composition and selection of review teams. It was
found that the selection of staff or parents from the
same school learning community could lead to
cross-fertilisation of ideas and more cooperative
planning between schools.
Conclusion
This paper has provided an illustrative example of
evaluation in practice, whereby evaluation can be
incorporated into schools' routine organisational,
planning, and management practices.
It has demonstrated that the influence that the
Cyclical Review process had on participating
school principals varied with involvement. It was
found that the principals who advised only as
team members in other principals’ schools were
primarily concerned with problem-solving when
outcomes fell short of objectives. This is consistent
with the limits of single-loop learning. Principals
who had to understand and apply criteria in the
evaluation of their own schools, design evaluation
strategies and techniques, and later reflect
critically on the quality of evaluation practices and
capacities in order to improve the values and
assumptions influencing their own practices were
shown to engage in deep learning. That is, they
were engaged in a kind of learning that fully
integrated an experiential learning cycle of
experiencing, reflecting, thinking, and acting. This
is consistent with double-loop learning.
Catalytic values mediated the impact of
context, human, and evaluation factors, acting as
threshold conditions for change. This raises
important questions about how deeply participants
in evaluations may be prepared to engage. Values
that foster a learning mindset at the individual
and group levels are well documented in the
leadership literature and are regarded as critical
to understanding the power of values alignment in
these spheres. Importantly, this finding suggests
that the nature, influence, distribution, and use
of power within participatory evaluation contexts
may mediate influence.
Theory, policy, practice, and further research
into school-based cyclical reviews should all be
refined to reflect the principles of double-loop
learning beyond traditional and formalised
evaluation boundaries. Learning, in effect,
becomes a pervasive transformative process
within and beyond the learning (evaluation)
community of interest.
References
Alkin, MC (1985), A Guide for Evaluation Decision Makers,
Beverly Hills, CA, Sage.
Argyris, C & Schön, DA (1978), Organizational Learning:
A Theory of Action Perspective, Reading, MA,
Addison-Wesley.
Borzillo, S et al. (2012), ‘Communities of practice:
keeping the company agile’, Journal of Business
Strategy, vol. 33, no. 6, pp. 22–30.
Dalton, K (2011), Leadership and Management Development,
Harlow, UK, Pearson.
Degenhardt, L & Duignan, P (2010), Dancing on a
Shifting Carpet: Reinventing Traditional Schooling for
the 21st Century, Camberwell, Vic., ACER Press.
Giddens, A (1984), The Constitution of Society: Outline of a
Theory of Structuration, Berkeley and Los Angeles,
University of California Press.
Hargreaves, A (2013), ‘5 I's of Educational Change’,
paper presented at the School Leadership Symposium
2013, Zug, Switzerland, 26–28 September 2013.
Kirkhart, K (2000), ‘Reconceptualizing evaluation use:
an integrated theory of influence’, in VJ Caracelli &
H Preskill (eds), The Expanding Scope of Evaluation
Use, New Directions for Evaluation, vol. 88, pp. 5–24,
San Francisco, Jossey-Bass.
Macpherson, RJS (2014), Political Philosophy, Educational
Administration and Educative Leadership. London,
Routledge.
Mark, MM & Henry, GT (2004), ‘The mechanisms and
outcomes of evaluation influence’, Evaluation, vol.
10, pp. 35–57, DOI: 10.1177/1356389004042326.
Owen, JM (2006), Program Evaluation: Forms and
Approaches, Crows Nest, Australia, Allen and
Unwin.
Scharmer, CO (2009), Theory U: Leading from the Future as
it Emerges, San Francisco, Berrett-Koehler
Publishers.
Scharmer, CO & Kaufer, K (2013), Leading from the
Emerging Future: From Ego-System to Eco-System
Economics, San Francisco, Berrett-Koehler
Publishers.
Senge, PM et al. (2010), The Necessary Revolution: How
Individuals And Organizations Are Working Together
to Create a Sustainable World, New York, Broadway
Books.
Stoll, L (2013), ‘Creativity in Leadership: Issues and
Possibilities’, paper presented at the School
Leadership Symposium 2013, Zug, Switzerland, 26–28
September 2013, viewed 21 November 2013,
<http://www.edulead.net/2013/speakers.php>.
Stufflebeam, DL (2002), Institutionalizing Evaluation
Checklist, viewed 23 October 2009,
<http://www.wmich.edu/evalctr/checklists/institutionalizingeval.htm>.
Wenger, E et al. (2002), Cultivating Communities of
Practice, Boston, Harvard Business Press.
Wenger, E et al. (2009), Digital Habitats: Stewarding
Technology for Communities, Portland, CPsquare.
A participatory-action-research (PAR) project was undertaken in one government school region in Sydney, New South Wales, to develop and implement a whole-school review process that used school principals in pivotal participatory evaluation roles. The purpose of this paper is to report a study (conducted concurrently with the PAR project) that examined the influence experienced by the participating school principals. The paper presents qualitative research using a case-study methodology. It is based on data collected from multiple sources that tested the concept of evaluation influence in relation to whole-school participatory evaluations in government schools. The paper presents a provisional theoretical model that was specifically designed to collect and map Kerrie Ikin Kerrie Ikin is an independent evaluation consultant and has just successfully completed a PhD focusing on evaluation influence and evaluation capacity building in the UNE Business School at the University of New England, Armidale, NSW, Australia. Email: <kerrie.ikin@gmail.com Peter McClenaghan Dr Peter McClenaghan is a Lecturer in Organisational Behaviour and Executive Leadership in the UNE Business School, specialising in leadership development and organisational behaviour. He teaches the Executive Leadership module in the UNE MBA. Email: <pmcclena@une.edu.au Ikin, K. McClenaghan, P. 2015 Modelling the influences of evaluation on school principals: Towards evaluation capacity building. Evaluation Journal of Australasia, 15, 1, pp.19-27.
  • 2. Ikin & McClenaghan page 2/9 influence data as they occurred. It shows how an initial curiosity in developing a model to untangle strings of influence led to new knowledge about factors triggering influence, the types of evaluation influences that occurred and when they occurred, and the processes whereby evaluation itself was improved and evaluation capacity built. In doing so, it emphasises the role of personal values as catalytic conditions. The paper further shows how the findings about double- loop learning cohere with notions of transformational leadership and communities of practice as a way of explaining a more holistic conceptualisation of evaluation influence. Finally, the paper proposes a model that explains the broader causal story of evaluation influence in a school-review setting.
  • 3. Ikin & McClenaghan page 3/9 The Study in Context From the early 2000s in NSW, system-supported school reviews occurred by exception, in cases where various data indicated underperformance of a program or dysfunction in the school’s management structures. There was no systemic mechanism for all schools to participate in, or be subject to, regular reviews of their whole-school operation or to develop their own evaluation capacity. Such a process was foreshadowed in 2003–04, when a component called ‘cyclic reviews’ appeared in a new framework for school accountability, but no definition or specification of what a cyclic review entailed was given, and no further work on these cyclic reviews at a system- wide level was pursued at that time. One Regional Director (one of ten within the education department in NSW at the time) took the initiative in 2006 to develop the cyclic review component of the framework for his region’s schools and listed the development of what he termed a regional ‘Cyclical Review’ process for schools that would ‘strengthen school development support and community confidence’ as a priority in the regional strategic plan. As a result the Cyclical Review process, whereby principals would be key participants in their own schools’ evaluations, and in the evaluations of other principals’ schools, was developed and trialled over a four-year period. Figure 1 clarifies the composition of review teams. In order to examine the involvement of principals in the evaluation processes, the concept of evaluation influence was seen as particularly relevant. Given that school improvement was seen as a major reason for the introduction of Cyclical Reviews, the use that the principals made of the evaluation findings and whether they were influenced by these findings was important. A further matter, of equal importance, was the influence that involvement in the review process had on the principals’ skills in, understandings of, and attitudes towards evaluation. 
This influence was interpreted as being not only that experienced by the principals as individuals, but also the influence they experienced as school and system leaders. Finally, if the Cyclical Reviews could be seen as enhancing the skills in, understandings of, attitudes to, and application of evaluation processes and procedures by those involved, then that enhancement could be defined as evaluation capacity building (ECB). The aim of the study was to gain impressions from the principals about the factors, if any, that had influenced them to use the evaluation processes and results, both in the course of the review and for at least twelve months afterwards. The practical intention was to use this knowledge to modify and strengthen the Cyclical Review program in the region and, in so doing, contribute to the implementation of that element of the statewide accountability framework. Finally, it was expected to add to current knowledge concerning Cyclical Reviews both as theory and policy. Three research questions were posed: 1. What factors, prior experiences, and understandings contributed to the influence that the involvement in Cyclical Reviews had on the participating principals? 2. How did participation in Cyclical Reviews influence participating principals? 3. To what extent were the outcomes of ECB demonstrated by the principals who participated in the Cyclical Reviews? The study compared two groups of principals. FIGURE1: COMPOSITION OF REVIEW TEAMS To develop a sense of ownership of the process and build the evaluation capacity of many more school and regional staff and, in particular, the principals, each review team comprised:  A principal as the team leader.  A principal from another school in the region not closely connected to the host school.  School staff, including executive staff, from within and beyond the school. The number of team members depended on the size of the school being reviewed. 
 A regional consultant, selected to match the perceived development needs of the school.  Other personnel—for example, parents, personnel from special interest groups—could be members of the review team, depending on the needs of the school.  A school development officer (a senior officer experienced and trained in conducting school reviews) as an ex- officio team member, acting as coach to the team leader and assisting in the review as required.  In addition the host-school principal collaborated with the team-leader principal in the organisation of the review and in providing additional information or clarification throughout the review as required.
  • 4. Ikin & McClenaghan page 4/9 Group 1 (n = 9) was involved in the design of Cyclical Review criteria and processes, hosting reviews, and leading reviews. Group 2 (n = 9) served only as team members on other principals’ reviews. The approach required the development of a model to classify the data being used to address the research questions and interpret the findings in terms of the relevant literature. Provisional Theoretical Model of Evaluation Influence The model of evaluation influence (Figure 2) was developed from the range of relevant literature including, in particular, the models proposed by Kirkhart (2000) and Mark and Henry (2004). This tentative model classified the factors that could trigger influence as context, human, and evaluation, in line with Alkin’s (1985) categories. It also conceptualised influence on four dimensions: source (process and results); intention (intended and unintended); level of analysis (principals as individuals, as school leaders, and as system leaders); and time (immediate, end-of-cycle, and long term). Together these elements initially appeared to explain the degree of influence in two areas: the application of review findings and ECB. While the theory and model derived from it proved effective for data-gathering purposes, a number of limitations were found with regard to the analysis of patterns of influence, one of the key factors identified as a core element from the literature. First, the model as depicted did not adequately represent how learning about evaluation influences differed between the two groups of principals involved in the study. Second, it did not represent the catalytic conditions that appeared to make a real difference to the degree of both results and process-based influence. While numerous factors and subgroupings of factors were present, a set of catalytic conditions per se was not identified in the literature. 
Third, it did not recognise the role that values and assumptions were found to play nor the dynamics of structuration, whereby evaluation itself was improved and evaluation capacity was built. Fourth, while the provisional model enabled an episodic mapping and analysis of the microdynamics of influence, a more macro model was required to map and explain the broader causal story of influence found in the data about Cyclical Reviews. FIGURE 2: PROVISIONAL THEORETICAL MODEL FOR THE ANALYSIS OF EVALUATION INFLUENCE (ADAPTED FROM KIRKHART 2000, P. 8, AND MARK AND HENRY 2004, P. 41)
  • 5. Ikin & McClenaghan page 5/9 The basic schism in the findings was the difference between the general experience of influence experienced by Group 1 and Group 2 principals. Group 2 principals, who advised as team members only in other principals’ schools, were primarily concerned with problem-solving when outcomes fell short of objectives. They modified their evaluations and strategies and techniques in order to make relatively ad hoc improvements to their own evaluation practices and capacity building. This was consistent with the limits of single-loop learning as theorised by Argyris and Schön (1978) and Senge et al. (2010). In sharp contrast, Group 1 principals were strongly influenced by having to understand, apply, and improve criteria used in the evaluation of their own schools, design evaluation strategies and techniques, and later reflect critically on the quality of evaluation practices and capacities in order to improve the values and assumptions influencing their own practices. These activities illustrated the components of double-loop learning. The basic structure of these two patterns of influence is portrayed by the light and medium- shaded panels in Figure 3. Research Question 1: What factors, prior experiences, and understandings contributed to influence on the participating principals? The factors, prior experiences, and understandings that contributed to influence on participating principals were grouped into three categories— context, human, and evaluation factors—in line with existing theories from the literature. The context factors found to trigger influence included school culture, training, duration and timing, team size and composition, and review focus. The human factors found to trigger influence included motivations, and the interaction between the leadership of the review teams, the leadership by host principals, and human relationships. 
The evaluation factors found to trigger influence included structures and resources, and the data-collection processes and methods.

FIGURE 3: PROPOSED MODEL OF EVALUATION INFLUENCE (panels depict the single-loop and double-loop-learning pathways; modified evaluation strategies and techniques; context and human factors triggering influence; and the associated catalytic values)
From the analysis and synthesis of findings it was found that the evaluation methodology used in Cyclical Reviews was primarily independent as a process but differentially affected by context and human factors in each school setting. The major variances to standardised review processes were the leadership services provided to the review team, the host principal, and the partnership between the review leader and the coach. The conclusion drawn from this finding was that the Cyclical Review evaluation process varied substantially between schools owing to school contextual factors and, in particular, to the interaction of the human factors associated with leadership. While this conclusion is itself an interesting aspect of the relationship between the three factors that trigger influence, it in turn leads to two further conclusions. First, it can be theorised that the evaluation factors incrementally influence the modification of evaluation strategies and techniques used in each setting, and can therefore be considered intrinsic to the activities represented by the panel labelled Modified Evaluation Strategies and Techniques (What we do) in Figure 3. Second, this means that context and human factors interact with evaluation factors, and do so over time as participants make improvements to their evaluation practice. Their interactive relationship is indicated in Figure 3 by the double-headed arrows. This is consistent with prior theory about single-loop and double-loop learning as proposed by Argyris and Schön (1978). The key aspects of single-loop and double-loop-learning theory are that through an entity’s double-loop-learning attempts to reach a goal, the entity is able to modify the goal in the light of experience or even reject the goal.
Through an entity’s single-loop-learning attempts, however, the entity makes repeated attempts at the same problem, with no variation of method and without ever questioning the goal. It is notable that more recent practitioners and theorists, such as Degenhardt and Duignan (2010), Hargreaves (2013), and Stoll (2013), take for granted and regularly apply the ideas of double-loop learning.

Research Question 2: How did participation in the reviews influence participating principals?

The research findings led to the conclusion that participation in Cyclical Reviews influenced principals according to the role they played, in line with participatory evaluation and its links with ECB, much as proposed by Owen (2006, pp. 220–221). The different degrees of deep learning, the coherence of evidence collected and applied, and the pace at which cultural change was embedded were found to be related to the nature of the principals’ participation in evaluation processes and the subsequent acquisition of new personal theories and practices. These differences are shown explicitly in Figure 3 as the differences between single-loop and double-loop learning. One example consistent with single-loop learning discussed in the literature occurs when a participant connects a strategy for action with a result and makes corrections. If during single-loop learning an action yields results different from what was expected, the participant will observe the results, automatically take in feedback, and try a different approach. This trial-and-error approach matches closely the learning activities that the Group 2 principals tended to use. In contrast, and consistent with double-loop learning discussed in the literature, a participant would go beyond this basic approach and re-evaluate the deeper governing variables that appear to explain why participants behave in the ways they do.
Re-evaluating and reframing goals, values, and beliefs are more complex ways of processing information, involve more sophisticated ways of engaging with an experience, and look at consequences from a wider or more philosophical perspective. This again matches closely the sorts of learning activities and capacity-building strategies that the Group 1 principals engaged in. This important conclusion coheres with the exploratory works of Scharmer (2009), and Scharmer and Kaufer (2013), who have proposed ‘U learning’ (radically reconceptualising and reconstructing organisations) as critical in organisational learning. Importantly, this form of learning closely aligns with Wenger’s conceptualisation of learning through communities of practice (Wenger, McDermott et al. 2002; Wenger, White et al. 2009). The essence of this form of organisational learning is that it can take either a formal or an informal form. Thus, according to Wenger, learning can be ‘bootlegged’ through a circle of people ‘in the know’ who are motivated to learn irrespective of formal boundaries (Wenger, McDermott et al. 2002, p. 28).

Research Question 3: To what extent were the outcomes of ECB demonstrated by the participating principals?

The outcomes of ECB were far more evident in Group 1 principals over time than in Group 2
principals, largely as a consequence of the extent to which they engaged in deep learning about evaluation and other double-loop-learning activities. Both groups of principals demonstrated ECB to varying extents. At the outset there were many similarities in the nature of learning about evaluation for principals in both groups: most notably, their skills and understandings in evaluation were enhanced, and they began to demonstrate an improved ability to use evaluation data and processes. Over time, however, there was a marked difference between the two groups in the extent to which ECB outcomes were demonstrated. By the end-of-cycle phase the bifurcation in process-based influences, and hence the extent to which principals had developed and demonstrated evaluation capacity, had become particularly marked. These differences were primarily due to the rigour of the Cyclical Review processes and the learning about evaluation that they stimulated. Group 1 principals, in general, had developed to a far greater extent the skills to plan, design, collect, and analyse data and to interpret and report results. The long-term influences showed that Group 1 principals demonstrated the outcomes of ECB identified in the literature; they had embedded Cyclical Review evaluation culture and practices into their schools. In Stufflebeam’s (2002) terms, they had institutionalised reforms. In Giddens’s (1984) terms, they had learnt to manage structuration. In contrast, Group 2 principals were still grappling with the need to develop a comprehensive model of evaluation, with adoption characterised by the ad hoc or incoherent use of methods. This implies that Group 1 principals, or other principals who gain similar experiences and skills, could become coaches in future school-based cyclical-review processes, offer additional coaching, or take on other leadership roles in similar contexts in the future.
Put another way, Group 1 principals used double-loop learning to design and consolidate their learning about Cyclical Reviews, while Group 2 principals participated in another school’s review without the challenges and advantages of gaining empirical feedback about the effectiveness of their own understandings of school evaluation. As a result, the early and deep learning about evaluation processes experienced by the Group 1 principals led over time to their demonstrating ECB outcomes to a considerable extent. Further, Group 1 principals engaging in deep learning created both fresh structures in their schools and fresh outcomes, including new professional leadership capacities in evaluation, and thereby illustrated another aspect of Giddens’s (1984) structuration theory about the duality of structure: it is both process and outcome. The markedly different extent to which participation by the two groups resulted in ECB is also illustrated in Figure 3.

Implications for Theory

Findings from the research led to the conclusion that significant advances in the tentative theory for this study were warranted, specifically the incorporation of factors triggering influence into a broader model of how principals learn about evaluation and ECB. In this study, context and human factors interacted with evaluation factors for both groups of principals. Two other generic conditions identified in the literature review were relevance and timeliness. What did not appear in the literature was the importance of key values acting as catalytic conditions during Cyclical Reviews. The values that relate to the three factors triggering influence are presented in Table 1. While many of these values were recognised as characteristics or factors in the prior research, the analysis of data from this study led to the implication for leadership theory on Cyclical Reviews that these values act as threshold conditions for change.
TABLE 1: CATALYTIC VALUES RELATED TO FACTORS TRIGGERING INFLUENCE

Human factors    Context factors    Evaluation factors
openness         openness           openness
trust            readiness          clarity
credibility      clarity            consistency
competence       transparency       standards
knowledge                           consensus
commitment                          engagement
ownership                           transparency
                                    functionality

While a full treatment of the role of catalytic values was beyond the scope of this study, these findings suggest that the nature, distribution, and use of power in the context may mediate influence.
A recent review of political ideologies in educational leadership (Macpherson 2014) showed that a tradition of pragmatism has been gradually challenged by communitarianism, communicative rationalism, and egalitarian liberalism. The catalytic values in Table 1 are strongly indicative of influential leadership, with some indications of pragmatism, in a ‘school as a community’, given the valuing of competence, standards, and functionality. This implies that further research could use political philosophy to identify and evaluate how appropriately power is used to gain influence during school-based reviews that are cyclical. As discussed previously, the notion of learning outcomes from cyclical reviews should also take account of the wider learning opportunities and models that operate beyond the scope of the cyclical framework, as evidenced by learning in communities of practice (Wenger, McDermott et al. 2002). Borzillo and colleagues (Borzillo, Schmitt et al. 2012) claim that this creates learning agility through community interactivity, adding weight to Macpherson’s proposition of communitarianism. The next theoretical issue concerned the interplay between the three categories of factors. The findings showed that the evaluation methodology used in Cyclical Reviews was primarily independent as a process but was differentially affected by context and human factors in each school setting. Further, a third implication was that a major source of variance to standardised review processes was likely to be the leadership services provided to the review team, the leadership by the host principal, and the partnership between the review leader and the coach. According to Dalton (2011), integrating coaching into leadership development and learning is taking centre stage in new forms of organisational learning.
A deeper understanding of coaching as a learning model within the reviews would deepen learning opportunities and feed into double-loop-learning and triple-loop-learning frameworks. From the review of the literature, Cyclical Reviews were found to be a form of participatory evaluation with all of the features of PAR. ECB was found to be an organisational learning process intended to make routine the practice of quality evaluation concerned with continuous improvement. As a consequence of uncovering the very different experiences of Group 1 and Group 2 principals, and the extent to which these experiences impacted on evaluation practices and capacity building, the fourth implication was that school-based reviews that are cyclical should be theorised as PAR and organisational double-loop learning. The above have considerable strategic implications for school leaders in their professional development and their use, development, and implementation of evaluation policy. A key outcome of getting better value from the evaluations is to enhance the leadership capability and performance of the participating principals, whatever their role. Therefore a further implication for theory is that leaders of school-based reviews that are cyclical will need professional development to learn the theoretical and practical implications of double-loop learning. The full model presented in Figure 3 shows all the elements of evaluation influence derived from the study: single-loop and double-loop learning; the factors triggering influence—human and context—with evaluation factors being subsumed in the double-loop-learning and single-loop-learning process Modified Evaluation Strategies and Techniques (What we do); and the catalytic values, shown in clouds.

Implications for Policy

While the study was embedded in a specific system’s context, the findings suggest a cautious generic implication for systems of schools.
The implication was the need for this type of school review to retain all the features of PAR, which implied the need to organise reviews around cohorts of principals and for each future cohort to engage in experiential learning processes through designing, participating in, and then reflecting critically on the quality of evaluation and the degree of capacity building achieved. This also means that offering principals external validation services, even those provided by other principals, does not necessarily provide them with the double-loop-learning opportunities required to guarantee evidence-based evaluation practice and ECB. One option here, however, is to frame the reviews within more formalised communities of practice, as suggested by Wenger (Wenger, McDermott et al. 2002), the idea being to open up learning pathways that may have been repressed in more traditional, structured learning models.

Implications for Practice

From the analysis and synthesis of findings, and in line with the research literature, the quality and relevance of evaluation practice, in all of its forms, was found to exert influence in three broad ways. First was the extent to which Cyclical Reviews were efficiently and effectively conducted. Second
was the extent to which the skills, understandings, and evaluation capacity of the principals participating in the reviews were developed. Third was the extent to which review findings and processes were implemented in the schools of the participating principals. Practice in relation to planning processes was also found to be effective and efficient in that reviews were conducted on time, were completed following the planned procedures, and used the information-technology resources secured for the purpose. The review teams applied themselves with energy and conviction, and the schools under review appeared to have appreciated the process and the resulting recommendations for school and staff development. Refinements to a number of the processes were made by the principals as they developed evaluation skills, knowledge, and understandings. Further, by the time the study was concluding, advances in technology had led the researcher and a number of participants to consider adopting newer, more efficient, and more effective data-gathering and sorting methods. This is, again, consistent with double-loop learning. A further area of practice was that of professional learning and accreditation. It was found that the principals not only acquired new evaluation skills and knowledge but recognised and valued these qualities in others, and believed that the reviews were more effective when all involved—coaches, host principals, team leaders, and team members—were trained specifically for their roles. A final area of practice concerned the composition and selection of review teams. It was found that the selection of staff or parents from the same school learning community could lead to cross-fertilisation of ideas and more cooperative planning between schools.
Conclusion

This paper has provided an illustrative example of evaluation in practice, whereby evaluation can be incorporated into schools’ routine organisational, planning, and management practices. It has demonstrated that the influence that the Cyclical Review process had on participating school principals varied with involvement. It was found that the principals who advised only as team members in other principals’ schools were primarily concerned with problem-solving when outcomes fell short of objectives. This is consistent with the limits of single-loop learning. Principals who had to understand and apply criteria in the evaluation of their own schools, design evaluation strategies and techniques, and later reflect critically on the quality of evaluation practices and capacities in order to improve the values and assumptions influencing their own practices were shown to engage in deep learning. That is, they were engaged in a kind of learning that fully integrated an experiential learning cycle of experiencing, reflecting, thinking, and acting. This is consistent with double-loop learning. Catalytic values mediated the impact of context, human, and evaluation factors, acting as threshold conditions for change. This raises important questions about how deeply participants in evaluations may be prepared to engage and explore. Values that mediate a learning mindset at the individual and group level are well known in the extant leadership literature and are regarded as critical to understanding, at a deeper level, the power of values alignment in these spheres. Importantly, this finding suggests that the nature, distribution, and use of power within participatory-evaluation contexts may mediate influence. Theory, policy, practice, and further research into school-based cyclical reviews should all be refined to reflect the principles of double-loop learning beyond traditional and formalised evaluation boundaries.
Learning, in effect, becomes a pervasive transformative process within and beyond the learning (evaluation) community of interest.

References

Alkin, MC (1985), A Guide for Evaluation Decision Makers, Beverly Hills, CA, Sage.
Argyris, C & Schön, DA (1978), Organizational Learning: A Theory of Action Perspective, Reading, MA, Addison-Wesley.
Borzillo, S et al. (2012), ‘Communities of practice: keeping the company agile’, Journal of Business Strategy, vol. 33, no. 6, pp. 22–30.
Dalton, K (2011), Leadership and Management Development, Harlow, UK, Pearson.
Degenhardt, L & Duignan, P (2010), Dancing on a Shifting Carpet: Reinventing Traditional Schooling for the 21st Century, Camberwell, Vic., ACER Press.
Giddens, A (1984), The Constitution of Society: Outline of a Theory of Structuration, Berkeley and Los Angeles, University of California Press.
Hargreaves, A (2013), ‘5 I’s of Educational Change’, paper presented at the School Leadership Symposium 2013, Zug, Switzerland, 26–28 September 2013.
Kirkhart, K (2000), ‘Reconceptualizing evaluation use: an integrated theory of influence’, in VJ Caracelli & H Preskill (eds), The Expanding Scope of Evaluation Use, New Directions for Evaluation, vol. 88, pp. 5–24, San Francisco, Jossey-Bass.
Macpherson, RJS (2014), Political Philosophy, Educational Administration and Educative Leadership, London, Routledge.
Mark, MM & Henry, GT (2004), ‘The mechanisms and outcomes of evaluation influence’, Evaluation, vol. 10, pp. 35–57, DOI: 10.1177/1356389004042326.
Owen, JM (2006), Program Evaluation: Forms and Approaches, Crows Nest, Australia, Allen and Unwin.
Scharmer, CO (2009), Theory U: Leading from the Future as it Emerges, San Francisco, Berrett-Koehler Publishers.
Scharmer, CO & Kaufer, K (2013), Leading from the Emerging Future: From Ego-System to Eco-System Economics, San Francisco, Berrett-Koehler Publishers.
Senge, PM et al. (2010), The Necessary Revolution: How Individuals and Organizations Are Working Together to Create a Sustainable World, New York, Broadway Books.
Stoll, L (2013), ‘Creativity in Leadership: Issues and Possibilities’, paper presented at the School Leadership Symposium 2013, Zug, Switzerland, 26–28 September 2013, viewed 21 November 2013, <http://www.edulead.net/2013/speakers.php>.
Stufflebeam, DL (2002), Institutionalizing Evaluation Checklist, viewed 23 October 2009, <http://www.wmich.edu/evalctr/checklists/institutionalizingeval.htm>.
Wenger, E et al. (2002), Cultivating Communities of Practice, Boston, Harvard Business Press.
Wenger, E et al. (2009), Digital Habitats: Stewarding Technology for Communities, Portland, CPsquare.