How International Is Our School? MA Dissertation

Title: A pilot-test of a visualization and set of evaluation rubrics for factors affecting the promotion of international-mindedness and global engagement (IMaGE) of a school.

Published in: Education


Stephen Taylor (sjtylr), University of Bath, MA Education (International Education)

UNIVERSITY OF BATH
DEPARTMENT OF EDUCATION

HOW INTERNATIONAL IS OUR SCHOOL?
A pilot-test of a visualization and set of evaluation rubrics for factors affecting the promotion of international-mindedness and global engagement (IMaGE) of a school.

by STEPHEN JAMES TAYLOR
July 12, 2016

Dissertation for the award of MA Education (International Education). Word Count: 16,302

This dissertation has been edited lightly to activate hyperlinks in the references, and to remove the university submission coversheet. This paper is hosted on my personal professional blog at: https://ibiologystephen.wordpress.com/2017/01/15/how-international-is-our-school-ma-dissertation Contact: Stephen@i-biology.net or on Twitter: @sjtylr
Abstract

Developing ideas proposed in the Education in an International Context (EIC) paper in 2012, and adapting a research design from my Research Methods in Education (RME) paper in 2015, I sought to create, pilot-test and evaluate a set of tools that would allow a school to reliably evaluate and compare the various characteristics of its organization and practices that promote international-mindedness and global engagement (IMaGE). A literature review was used to create a working definition of IMaGE, and a set of eight descriptive rubrics for the various characteristics (radials) and their contributing elements was generated. Based on a 0-4 scale of degrees of quality, the rubrics were used in a small-scale case-study of my own international school in Japan as a pilot test. Using faculty volunteers (n=10), with the outcomes generating a 'web-chart' graphical representation of the IMaGE, some preliminary conclusions about the IMaGE of the school were drawn, along with an evaluation of the IMaGE tools themselves. The outcomes for the school were positive, though muted, with methodological limitations restricting the reliability of any conclusion regarding the state of the IMaGE of our school. Results suggested a need for more targeted professional development and consistent use of language regarding IMaGE across the school. Although evaluation of the web-chart for its intended purpose was encouraging, implications for the further development of the IMaGE tools include a need for further testing with larger sample sizes, a change in approach to a committee-based self-study model and significant further development (and possible redistribution) of the radials and contributing elements. There was significant concern about the tools' usefulness with parent or non-educator audiences.
Through future development of the IMaGE tools, there may be potential for application alongside other evaluative protocols or professional development. As this naturally leads towards more focused description and standardization, I have become conflicted about the IMaGE tools' potential role in homogenizing international education, and would like to carefully reconsider their purpose before developing them further.
Acknowledgements

A huge thank-you to Prof. Mary Hayden for her ongoing support and guidance through this project and through the MA programme at the University of Bath. Thanks also to Dr. John Lowe and Prof. Jeff Thompson for their enthusiasm at the inception of the idea in the EIC unit in the summer of 2012, and to Dr. Kyoko Murakami for her guidance in the RME unit in 2015. Thank you also to Luiz Zorzo of the International Schools' Association for providing me with the most recent ISA self-study tool for reference, and to my school – in particular my ten willing volunteers – for providing the inspiration and context for the study. Most importantly of all, thank you to my family, for the hugs, patience, space, time, food and love throughout this MA programme.

Author Declaration

1. The author, Stephen Taylor, has not been registered for any other academic award during the period of registration for this study.
2. The material included in this dissertation has not been submitted wholly or in part for any other academic award.
3. The programme of advanced study of which this dissertation is part (MA International Education) has included completion of the following units:
   a. Assessment
   b. Curriculum Studies
   c. Understanding Learners and Learning
   d. Education in an International Context
   e. Research Methods in Education
4. Where my material has been previously submitted as part of an assignment within any of these units, it has been clearly identified.

Stephen Taylor
July 12, 2016
Contents

Chapter 1: Introduction
1.1 Context
1.2 Rationale for the Research
1.3 Overview of the Research
1.3.1 Research Questions
1.3.2 Overview of Chapters

Chapter 2: Literature Review
2.1 Defining International-Mindedness and Global Engagement (IMaGE)
2.2 Measuring International-Mindedness and Global Engagement (IMaGE)
2.3 Evolving the web-chart
2.4 Influences on the IMaGE of a school: radials and elements

Chapter 3: Research Design
3.1 Timeline
3.2 Research Strategy
3.3 Research Design
3.4 Participants (Research Volunteers)
3.5 Methods and Procedures of Data Collection
3.6 Validity & Reliability
3.7 Methods of Data Analysis
3.8 Ethical Issues
3.9 Conclusions

Chapter 4: Data & Analysis
4.1 The IMaGE of our international school
4.2 Analysis of findings: unpacking the radials
4.3 Analysis of findings: evaluating the web-chart

Chapter 5: Conclusions of the Pilot Study
5.1 Limitations of the research design
5.2 Discussion of findings
5.2.1 Implications for our international school
5.2.2 Implications for the IMaGE project as a tool
5.3 Recommendations & Further Research
5.4 Final Conclusions & Reflections

References

Appendices
Appendix 1: Ethics & voluntary informed consent form
Appendix 2: Prototype evaluation rubrics for the eight radials of the web
Appendix 3: Survey evaluation and feedback questionnaire
Appendix 4: Volunteer rubric responses (raw and processed data)
Appendix 5: Volunteer survey evaluation responses (raw and processed data)
Chapter 1: Introduction

1.1 Context

"Is every school an international school?" That is a question I set out to explore through my Education in an International Context (EIC) unit for the University of Bath, resulting in the inception of a web-chart of the dimensions of a school that promote international-mindedness (IM) and global engagement (GE), or "IMaGE" (Taylor 2013). The idea behind the web-chart was to create a 'visual definition' of the international dimension of schools: a tool with which a school might self-assess and present its relative qualities in terms of the promotion of IMaGE. Where the shape of a spider's web is the 'fingerprint' of its species (Zschokke & Vollrath 1995), the shape of a school's IMaGE web-chart is unique; its international dimension strengthened by connections and tensions between its radials (Taylor 2013). I chose the acronym IMaGE in a deliberate attempt to communicate a collection of ideas in one simple term. First, it serves as a 'catch-all' term for defining the essence of international education, including the individual and school-wide states of mind of international-mindedness and the observable actions of global engagement. Secondly, through the development and application of a series of rubrics, I propose that the various components of schooling that contribute to the development of IMaGE can be measured, and thus represented graphically in a form that clearly presents the levels of quality of a school, which might be used in place of lengthy descriptions or definitions that might not be accessible to all stakeholders.
I use the phrase ‘the IMaGE of a school’ as shorthand for ‘a visual definition of the degree of quality of eight contributing factors (radials) in the development and promotion of international mindedness and global engagement of its learners, as determined through the application of eight descriptive rubrics’, with IMaGE itself defined as “the thoughts and actions of an individual with an informed ecological worldview, based on intercultural understanding, global competence and empathy, resulting in purposeful, ethical and future-oriented engagement with issues of local, national and global importance,” a definition which will be justified
in section 2.1. The 'IMaGE tool' referred to throughout the paper encompasses the eight rubrics and the resultant web-chart, clarified below. Also known as a radar chart (Tague 2005, in Taylor 2015), the web-chart is a graphic representation that communicates multiple degrees of quality of contributing factors in a concentric shape, with the outer levels representing a higher degree of quality (Fig. 1). As noted in my EIC paper, the degree of quality reflects the extent to which a school evidences that characteristic or trait (after Nikel & Lowe 2010), and must not be confused with a judgement of the relative standard of 'goodness' of the school (a school can be educationally strong without being 'international'). As in Hill's 2006 model of cultural interaction, a higher rating for the degree of quality of a radial gives a greater shaded area on the web, moving out from the centre (see Fig. 1). Furthermore, through the overlapping of and interaction between the elements of the radials, the 'fabric' of education is reflected, or "stretched or maintained in tension" (Nikel & Lowe 2010). Important definitions to consider in reading this study relate to the construction of the web-chart and the research design. In keeping with the web analogy, the eight contributing factors to the chart are known as radials (Zschokke & Vollrath 1995; Taylor 2013). Participants in the pilot-test are known as volunteers. In the initial stage of the pilot-test, volunteers gave a quick pre-test assessment of their perceived degree of quality of each of the eight radials, to be used later in evaluating the tools. Between five and seven elements were identified through the literature review as contributing to each of the radials; each of the elements has been assigned descriptors on a scale of 0 ('not at all') to 4 ('very high') (or 'I don't know') in an evaluation rubric for each radial.
After volunteers assigned a degree of quality for each element in a radial, they gave an overall best-fit rating for the radial, which was recorded in the sample population data and used to create the IMaGE of the school.
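The aggregation and plotting steps described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the study's instruments: the radial names, volunteer ratings and function names are all invented for the example, and 'I don't know' responses are represented as `None` and excluded from the mean, which is one reasonable reading of the procedure described.

```python
import math

def radial_mean(ratings):
    """Mean best-fit rating for one radial on the 0-4 scale,
    ignoring 'I don't know' responses (represented here as None)."""
    known = [r for r in ratings if r is not None]
    return round(sum(known) / len(known), 2) if known else None

def web_vertices(ratings, max_rating=4.0):
    """Map one rating per radial to the (x, y) vertices of the shaded
    web-chart polygon: radial i points at angle 2*pi*i/n, and a higher
    degree of quality pushes the vertex further from the centre."""
    n = len(ratings)
    return [((r / max_rating) * math.cos(2 * math.pi * i / n),
             (r / max_rating) * math.sin(2 * math.pi * i / n))
            for i, r in enumerate(ratings)]

# Hypothetical best-fit ratings from ten volunteers for two of the
# eight radials (names and numbers are invented for illustration).
ratings_by_radial = {
    "Curriculum": [3, 3, 2, 4, 3, None, 3, 2, 3, 3],
    "Faculty & Pedagogy": [2, 3, 3, 2, None, 3, 2, 2, 3, 3],
}
image_profile = {radial: radial_mean(rs)
                 for radial, rs in ratings_by_radial.items()}
print(image_profile)  # per-radial population means for the web-chart
```

A full chart would repeat this for all eight radials and join the vertices from `web_vertices` to shade the web; any charting library's radar or polar plot would serve the same purpose.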
Figure 1. Summarizing the IMaGE tools: the web-chart and supporting terminology

1.2 Rationale for Research

I wrote in my EIC paper that as we see the rapid growth of international education (Hill 2015; Hayden 2011), and are better equipped to study and evaluate the qualities that make an education international (Cambridge 2003; Cambridge & Carthew 2007; Cambridge & Thompson 2001; Hill 2000; Lowe 1998), so we also see challenges of identifying and describing types of schools in a more crowded marketplace (Haywood 2007; Yamato 2003). Consequently, "teachers and global
educationalists are currently drowning in a sea of seemingly similar terms" (Marshall 2007). There is great variability between schools offering apparently similar experiences (Haywood 2007; Hayden 2011; Yamato 2003), and schools can become accredited, authorized or evaluated by a range of agencies (Fertig 2007, 2015; Murphy 2012). With clear descriptors for comparative purposes (Taylor 2013), any school might apply the IMaGE tools, and each school would show a different pattern of degrees of quality: for example, a multicultural international school, a local school offering an 'internationalised' curriculum, an 'overseas school' of one nation or a national school with a high immigrant population would each have a unique IMaGE. In other words, "an international school may offer an education that makes no claims to be international, while an international education may be experienced by a student that has attended a school that has not identified itself as international" (Hayden & Thompson 1995). Necessarily limited to the international dimensions of a school, the tool seeks only to visualize the degrees of quality (characteristics) of the various radials (factors) that contribute to the school's IMaGE; quality here being used in a descriptive yet non-judgemental sense (Nikel & Lowe 2010), rather than Alderman and Brown's (2005) "public and independent affirmation" as part of a high-stakes external audit. While adherence to too restrictive a set of standards might lead to isomorphism (schools becoming similar through competition), the IMaGE tool is not a normative scale in which some schools are 'better than' others (Shields 2015), but one which illustrates a school's unique IMaGE in relation to others.
I proposed that potential uses for the tools might include tracking a school's development of IMaGE over time, serving as a component of self-study for external bodies, providing a comparative tool for schools in a similar area, or simply supporting an internal audit for a school exploring issues of international education (Taylor 2013); moreover, "a culturally unbiased instrument or process for measuring IM is [also] needed in the field" (Duckworth, Levy & Levy 2005). Since completing the EIC paper in 2012, I have come across further sources and developments that have helped mature the web-chart visualization, including the updated and comprehensive SAGE Handbook of Research in International Education (Hayden, Levy & Thompson 2015), Miri Yemini's 2012 explorations of internationalization assessment in schools, Singh & Qi's
(2013) thorough, IB-funded research into 21st-century international-mindedness and Gerhard Muller's 2012 EdD thesis on factors affecting IM in schools. Furthermore, through my roles as an IB Middle Years Programme (MYP) Coordinator, whole-school Director of Learning, and school visitor and consultant for the IB Asia Pacific region, I am intimately familiar with the IB standards and practices (IBO 2014, currently under review), as well as the Council of International Schools' (CIS 2016) standards for accreditation. These outcomes, however, may not be understandable by all stakeholders; from personal experience, it has been necessary to add extra visualisation and explanation to the outcomes of my own school's programme evaluation report in order to make sense to parents and avoid 'focusing on the negatives'. I will refer to all of these sources, among others, in my literature review of the topic in Chapter 2. In conclusion, my goal for this project was to use the literature review to develop a prototype set of rubrics for the IMaGE of a school, pilot-testing them for validity, reliability and operability. I developed a small-scale case-study of my own school, using my findings to make recommendations for my school and for the evaluation of the IMaGE tools.

1.3 Overview of the Research

Through this research, I aimed to develop the IMaGE tools and apply them in a pilot test, using my own school as a small-scale case-study to generate feedback on reliability, validity and operability, modified from my RME paper (Taylor 2015). In doing so, I interrogated the literature on the factors that affect the IMaGE of a school, making adjustments to the model where necessary and using critical analysis to develop an underlying set of prototype rubrics for the evaluation of each radial.
I then conducted a pilot-test using a group of ten faculty volunteers to evaluate our context and analyse the data, producing a web-chart for our international school in Japan. Using a quantitative (survey-based) then qualitative (commentary) mixed-methods approach, the case-study generated sample web-charts for the IMaGE of the school: a PreK-12 international day and boarding school of 600 students of around 35 nationalities in Kobe, Japan. It was hoped that this case-study might help lay some foundations for the school's further IMaGE development and evidence collection in time for a synchronised CIS and IB evaluation visit in 2019.
1.3.1 Research Questions

In summary, this study investigates the following primary research question:

• To what extent do the web-chart and its supporting rubrics generate a useful, reliable and operable tool for self-evaluation and visualization of the IMaGE of a school?

A sub-question, regarding the outcome of the pilot-test as a limited case-study of our context, is:

• What preliminary conclusions can we draw regarding the IMaGE of our school from this pilot-test?

1.3.2 Overview of Upcoming Chapters

Through the literature review in Chapter 2, I put the research in a wider context of international school evaluation, justifying the choice of the web radials and the design of the rubric. I critically evaluate key literature on factors affecting the IMaGE of a school, resulting in the production of a working prototype set of evaluation rubrics (presented in full in Appendix 2). The research design and tools are clarified in Chapter 3, including choices of survey instruments and justification for choices of methodology and statistical tests. In Chapter 4, I process, present and analyse the collected data, giving comparisons of pre-test and best-fit population mean data for the eight radials and generating a preliminary web-chart "IMaGE" for our international school, supported by tables for individual radial elements, as well as analysis of participant feedback data on the design of the tool. This data analysis informs the conclusions of Chapter 5, leading to a critique of the research design, overall outcomes and implications of the study, and closing recommendations for further research.
Chapter 2: Literature Review

With such a rapidly-developing field of research, this could be a constantly fluid project, adapting and evolving as the landscape of international education shifts. In this chapter, I review literature on definitions of IMaGE, contemporary attempts and protocols used to measure or evaluate IMaGE, and key literature regarding factors that influence the development of IMaGE in a school. The combined findings of the literature review will form the basis for the development of the prototype IMaGE rubrics (Appendix 2).

2.1 Defining International-Mindedness & Global Engagement

Although the International Schools Association's (ISA 2011) self-study guide for internationalism in education "offers no preconceived definitions or interpretation of internationalism or international-mindedness", instead opting to allow the school to "speak to and for itself", Swain (2007, in Singh & Qi 2013) argues that "defining and applying international-mindedness [...] might be used as a basis for formal learning". In my experience it is helpful to provide a common definition for IMaGE that can be used by all stakeholders; given the potential future application for comparison between schools (or within a changing school over time), it may be necessary to have a stable reference for interpretation of the rubrics and questionnaires. In my EIC and RME assignments, I stated that "a school that promotes a high degree of IMaGE in its students demonstrates a commitment to international understanding and education for a sustainable future. Through its curriculum, faculty and learning opportunities it provides opportunities for its students to appreciate and engage with a wide range of perspectives, cultures and values.
Its learners demonstrate compassion for others through their thoughts and actions, and are engaged in meaningful service and action with the wider world.” (Taylor 2013, 2015).
Though it captures some sense of internationalism in education, it is over-long and poorly understood, so it was necessary to generate a more contemporary and concise definition for use in the tools, one that incorporates and reflects 21st-century thinking and actions. To achieve this, I needed to unpack the separate terms of international-mindedness (referring mostly to thoughts and dispositions) and global engagement (actions). Terms emphasised in bold in the coming paragraphs connect to the working definition of IMaGE presented at the end of this section. Where Haywood (2015, p.79) asserts that international education is a term that "may simply be too vague to be useful", and Marshall (2015, p.38) prefers the term global education, Hill (2013) advocates for education for international-mindedness. This focus, Haywood notes, "shifts the focus of attention towards the outcomes of education rather than on the process itself", giving an observable set of outcomes that may be measured or evaluated and that "begs for definition in terms of specific criteria" (Haywood 2007). Using the term international-mindedness recognizes the decades of work that have been done in the field by the IBO (Hill 2015), with their Learner Profile giving an example of some observable outcomes of education (Haywood 2007), creating individuals "who recognize their common humanity and shared guardianship of the planet, help to create a better and more peaceful world" (IBO 2009). As we dig deeper into research on international education, the term international-mindedness (IM) is commonly used, with Haywood (2015) noting that "there seems to be a prevailing perception that 'we know what we mean, even if the definition is still under construction'". Under construction it may be, yet there have been many attempts, with overlapping ideas and phrases.
In 1991, Hett described internationally-minded individuals as those who "possess an ecological world view, believe in the unity of humankind and the interdependence of humanity, support universal human rights, have loyalties that exist beyond borders and are futurists", and in 1993 described global-mindedness as "a worldview in which one sees oneself as connected to the world community and feels a sense of responsibility for its members and reflects this commitment through demonstrated attitudes, beliefs, and behaviors" (Hett 1993, p.143). Hett's five dimensions of global-mindedness (responsibility,
cultural pluralism, efficacy, global centrism, and interconnectedness) were informative in developing the radials and elements of the web-chart. The IBO stress that learners need to understand "their place in their local community as well as in an increasingly interdependent, globalized world" in a process that "starts with self-awareness and encompasses the individual and the local/national and cultural setting of the school as well as exploring the wider environment" (IBO 2009), and define an internationally-minded individual as one who "values the world as the broadest context for learning, develops conceptual understanding across a range of subjects and offers opportunities to inquire, act and reflect" (IBO 2012a). Common elements of IM definitions are found in much of this work, including a sense of 'self' and orientation in the world, inter-connectedness, intercultural understanding and responsibility for the future and others (Muller 2012; Skelton 2007; Skelton 2013; Hersey 2012; Duckworth, Levy & Levy 2005). Building on the mindset of IM, and the sense of responsibility and action that comes through some of the definitions so far, is global engagement (GE), defined by the IBO as a "commitment to address humanity's greatest challenges in the classroom and beyond, [...] developing opportunities to engage with a wide range of locally, nationally and globally significant issues and ideas" (IBO 2012b). Engagement should be interpreted as the actions of an internationally-minded individual, and may give more of the 'observable outcomes' of education for IM mentioned by Haywood (2015, p.80). A distinction should be made here between global engagement and global citizenship; terms that sound similar but are again a source of potential confusion (Marshall 2007).
Global citizenship should be interpreted as the means through which a globally engaged individual takes action, “understood as a multidimensional construct that hinges on the interrelated dimensions of social responsibility, global competence, and global civic engagement” (Fig. 2, Morais & Ogden 2011). Global citizenship, therefore, reflects a set of skills and a fluid body of knowledge that could be built into a school’s curriculum (OXFAM 2016), a complex set of practices that has been included as its own radial in the web chart. Global competence bears highlighting in the final definition, making clear that to be successfully globally engaged, one must be armed with an appropriate set of skills and informed global knowledge (Morais & Ogden 2011).
Figure 2: Morais & Ogden's (2011) Global Citizenship Model

Interestingly, the former Director General of the IBO, George Walker, described international-mindedness as a '20th-century' notion of 'otherness' or of events occurring in 'distant, exotic' places; he preferred global-mindedness, with its suggestion of inter-connectedness, and with four components that reflect "21st Century realities [...]: emotional intelligence, communication, (inter)cultural understanding and collaboration" (Hill 2000 & Walker 2004, in Hersey 2012). Although this may be in keeping (perhaps unintentionally) with language around the globalization of international education (Cambridge 2002; Bates 2010; Bunnell 2008), I elected to use the compound term international-mindedness and global engagement (IMaGE). Other than the obvious neatness of the acronym and its suggestion of visualisation, its components can be easily recognized and interpreted, even if the reader is not familiar with the literature. Recognizing that technology and accessible international travel have removed many barriers to global communication, learning and action (Hill 2000 & Walker 2004, in Hersey 2012), studies of IMaGE education offer new insights. We can see the shift from 'international understanding' as a construct of 'us vs. them', to a more inclusive international-mindedness as an inter-personal (or
ecological, interconnected) state, which itself can branch off into related fields of cosmopolitanism, cultural intelligence and studies of common humanity (Singh & Qi 2013). Singh and Qi offer the following five 'concepts of 21st-century international-mindedness' (summarized in parentheses by me), which are informative in part for defining IMaGE for this study, but more usefully for informing the construction of the rubrics:

1. Planetary intellectual conversation (transnational sharing, knowledge, development and discovery, including collaboration across disciplines and borders, perhaps facilitated by technology)
2. Pedagogies of intellectual equality (between students of all nationalities, including practices such as differentiation and language support)
3. Planetary education (a 'we-humans' (Bilewicz & Bilewicz 2012) approach to global education, important in curriculum development and pedagogy)
4. Post-monolingual language learning (moving beyond a second language into multilingual education, with some focus on cross-language transfer)
5. Bringing forward non-Western knowledge (students learning from and with each other, across languages and with a 'planetary' curriculum)

This more modern framework serves to emphasise idealistic 'common humanity' values of empathy-focused internationalism in education, bringing them to the fore, yet providing clarity on their pragmatic implications (after Cambridge & Thompson 2004). As it moves the definition of international education from a cosmopolitan (or overseas schools) notion of learning new languages in new places with new people, to something more holistic and shared ("planetary", "transnational"), it also incorporates more modern educational ideas on intellectual equality, conceptual language learning and global engagement through the use of technology.
Having said this, Singh & Qi's five concepts represent a lot of new – potentially confusing or threatening – vocabulary that might stand in the way of understanding for users of the IMaGE tools. Consequently, though the rubrics include elements related to these concepts, the language used in my definitions of IMaGE is more recognizable.
Although it is impossible to capture decades of research in IMaGE education in a short definition, my working definition of 21st-century IMaGE for use in the prototype tools is below, and is included on all survey tools for reference during the study:

The thoughts and actions of an individual with an informed ecological worldview, based on intercultural understanding, global competence and empathy, resulting in purposeful, ethical and future-oriented engagement with issues of local, national and global importance.

2.2 Measuring international-mindedness in schools

As long as we have been trying to define international-mindedness, we have been trying to measure it (Cambridge & Carthew 2015; Yemini 2012), yet as Yemini (2012) notes, "although schools around the world invest increasing resources and attention to encouraging the development of an international mindset in students, almost no specific measures are currently available to assess the outcomes of these efforts". With a diversity of purposes and methods, from the individual-focused measurement of perception and attitudes (Hett 1991; Hayden, Rancic & Thompson 2000; Yemini 2012; Singh & Qi 2013), to qualitative formative school-level self-studies (ISA 2011; Muller 2012), through to more high-stakes external evaluation (Fertig 2015; CIS 2016a; IBO 2014), the application of assessment of international-mindedness tends not to be in the comparative sense. Although multiple international (or global) mindedness scales exist (Singh & Qi 2013), these tend to focus on individual perceptions rather than the organizational actions that lead to those perceptions; for my purpose they are more useful in the development of the element descriptors than in the overarching model.
For example, Hett’s Global-Mindedness Scale or Morais & Ogden’s (2011) Global Citizenship Scale might provide useful indicators for elements within the Students & Transition and Faculty & Pedagogy radials. For the remainder of this section, I will focus on measures of a school’s IMaGE (or internationalization) as a whole.
Common to measures of internationalization in education tend to be the following classifications or standards (Beerkens et al. 2010, in Yemini 2012):

1. “the purpose of the assessment (self-evaluation, classification or rating);
2. the type of indicators to be measured (qualitative or quantitative);
3. the dimensions to be measured (including institutional commitment and curriculum, among others);
4. the structure to be used (survey, independent expert observations or internal data collection); and
5. the method of indicator validation (comparisons over time or of different institutions).” (Beerkens et al. 2010)

Regarding classifications 1 and 5, one of the potential applications of the IMaGE tools is to provide a reliable, comparative tool for visualizing a school’s evaluation of its IMaGE, either within the school over time or between schools (Taylor 2013, 2015). Through this study, I aim to develop my IMaGE tools to be complementary to key existing resources, so that users might apply them in their own context without an additional burden. Through the development of the evaluation rubrics, I aim for a mixture of quantitative and qualitative data (classification 2), collected across eight broad radials (dimensions) of internationalization of the school (classification 3). The structure of my study is based on self-study (classification 4), which allows schools to “speak for themselves”, identifying their own areas of strength and development (MacBeath 2005). This lower-anxiety model of evaluation (as opposed to a purely external ‘inspection’) involves more stakeholders in the school, generating buy-in into the outcomes (MacBeath 2005), and may then be verified by external agencies such as the CIS or the IB.
Along with their wider participation, these evaluations have a wide general audience, from school governance, leadership and faculty to students and (prospective) parents; one might find the outcomes of an evaluation on the school’s website or promotional materials, and there is an expectation that the results are shared with the community (CIS 2016a; IBO 2014). Clear, standards-based tools are a commonality in self-study of internationalization (Yemini 2012), though given the descriptive nature of a school’s IMaGE qualities these may tend towards
qualitative data collection (Yemini 2012). Beyond simple diversity statistics, much quantitative data collected by the school reflects other ‘qualities’ (such as exam results or funding) that are not considered in this IMaGE study; the rubrics are an attempt to quantify the qualitative.

Three commonly-used measures for international schools are the CIS’s (2016a) Criteria for the Evaluation & International Accreditation of Schools, the IBO’s (2014) Programme Standards and Practices and the International Schools Association’s (ISA 2011) Self-Study Guide for Internationalism in Education. Comparison of these measures informs the development of my own IMaGE tools.

The ISA’s self-study tool, though lowest-stakes, offers the greatest flexibility of the three, declining to offer a definition of international and focusing on an iterative process of reflection. As a school defines, investigates and reflects on its own definition of international in education, it explores four self-study areas: school values; curriculum and teaching practices; school communities; and school management. The self-study guide breaks these areas down into sub-areas, offering guiding questions for investigation and suggestions for evidence, though it deliberately shies away from statements of standards or quality. Of the three tools, this approach might best preserve diversity across international schools, though it is limited in that it does not offer an externally-validated accreditation. Furthermore, its openness to interpretation might be limited by the experience (or expertise) of the participants (Cambridge 2015), and would require strong guidance or institutional memory to maintain a sense of ‘standard’ over time as faculty changes. However, it offers an inviting approach and, with its guiding questions and suggestions, is a strong resource for use in evaluation.
Accreditation or authorization by recognized bodies, dependent on clear standards or descriptors, may increase a school’s hold in an increasingly crowded market (Fertig 2007; Cambridge 2002; Hayden 2011; Murphy 2012), yet the application of global standards may lead to schools that are more homogeneous and less diverse (Fertig 2007). This poses an interesting problem for my study: though the aim is to develop reliable IMaGE tools, it is not to homogenise schools. Both the IB and CIS tools offer external accreditation and evaluation following a process of self-study, and the organisations are now working together to align visits in schools (CIS 2016a; IBO 2016), where the authorization of the IB programmes can be seen as a subset of the wider school accreditation of the
CIS. In both cases, the self-study requires a steering committee and diverse stakeholder representation, a process of evidence collection and presentation, and the formation of an action plan for improvement. Where the CIS protocols encompass all aspects of the school across nine domains (including premises and safety), with internationalism woven through the standards of some domains, the IB’s protocols zoom in on philosophy, organization and curriculum (including teaching practices), with frequent reference to international-mindedness and the learner profile. In both cases, there are standards that must be met prior to authorisation, with others allowed to be in progress. The overlap between the protocols, along with the potential to align visits, opens an opportunity for useful application of the IMaGE tools.

Both organizations have a four-tier evaluation system, with the CIS stages defined (with increasing quality) as ‘membership, preparatory, team visit and future aspiration’. The IB’s protocol requires schools to develop their own descriptors for the four levels, though where a school rates itself as a ‘1 (low)’ or a ‘4 (very high)’ it must be prepared to provide evidence (IBO 2014). A school working with both agencies might naturally align its descriptors, though this ambiguity again limits the potential for comparative or longitudinal studies, and is why I propose the use of a set of defined rubrics with descriptors that connect the agencies. To meet this end, the rubrics have four bands for ‘degree of quality’, plus ‘zero’ and ‘I don’t know’ bands, and their construction will be explained in chapter 3.5.1.
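As an illustration of how ratings against such banded rubrics might be aggregated in practice, here is a minimal Python sketch. The ratings are hypothetical, and the decision to exclude ‘I don’t know’ responses from the mean (rather than score them as zero) is an assumption made for illustration, not a rule prescribed by the rubrics:

```python
from statistics import mean

def element_score(ratings):
    """Mean of the 0-4 rubric ratings for one element, excluding
    'I don't know' responses (represented here as None) rather than
    counting them as zero."""
    known = [r for r in ratings if r is not None]
    return mean(known) if known else None

# Hypothetical ratings from seven volunteers for a single element;
# the two None values stand for 'I don't know' responses.
ratings = [3, 4, None, 2, 3, None, 3]
score = element_score(ratings)
```

A radial’s score could then be taken as the mean of its element scores, which is one way of seeing why an element in a five-element radial carries more weight than one in a seven-element radial.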
2.3 Evolving the web-chart

Since 2012, some development has taken place in the selection and description of the elements of the rubrics and the structure of the web chart. Initial anecdotal response to the web chart was positive, as an easily-interpreted, multi-faceted graphic (Tague 2005). The concentric design, with outward progression representing increasing quality, echoes Hill’s (2006) cultural model of interaction, as well as the Global Competence Aptitude Model™ (Hunter, Singh & Habick, n.d., Fig 3), in which the inner (green) circles represent internal readiness and the blue represent external readiness; the model itself provides a useful toolkit for the development of skills and attributes in the Global Citizenship Education radial.

Figure 3: Global Competence Aptitude Model™, from http://www.globallycompetent.com/model

In keeping with Nikel and Lowe’s (2010) model of the fabric of quality in education, I have maintained the positioning of the radials to try to communicate the tensions between radials – some have overlapping or complementary elements. For example, intercultural relationships are seen as important in the Students & Transition, Faculty & Pedagogy and Community & Culture radials. Building on the assertion that “a contextually relevant balance [...] does not imply equalization” (Nikel & Lowe 2010), I have attempted to maintain balance through the distribution of ideas in the radials and underlying elements. The Values & Ideology radial, with a high degree of commonality across other measures of internationalism (IB 2014; CIS 2016; ISA 2011), has five contributing elements, giving each a relatively high weighting in the development of the school’s IMaGE.
By contrast, in the Students & Transition radial, with seven elements and more varied approaches to their teaching and measurement, each element contributes slightly less to the overall IMaGE of the school. Table 1 below presents a comparison of the ‘big ideas’ of the foundational resources I have used, in an attempt to identify common themes and categories to inform the adjustment of the
radials and the selection and description of the elements within. Each of these sets of standards is further unpacked to provide guidance or description, yet common themes are immediately apparent: values, leadership/governance, curriculum, teaching and pedagogies, global citizenship education, and themes related to various stakeholder groups.

Table 1: Comparing the ‘big ideas’ of measuring IMaGE

ISA (2011) – Terminology: Areas
1: School Values
1.1 Values & Internationalism
1.2 School values and rules relating to respect for others
1.3 Commitment to values
2: Curriculum & Teaching Practices
2.1 Curriculum
2.2 Teaching practices
2.3 Curricular materials
3: School Communities
3.1 School and Local Community
3.2 School and Students
3.3 School and Family
3.4 School and Teachers
4: School Management
4.1 Governance
4.2 Management
4.3 Admissions Procedures
4.4 Public Relations
4.5 Facilities

IB (2014) – Terminology: Standards & Practices
A: Philosophy
B: Organization (B1 – Leadership & Structure; B2 – Resources & Support)
C: Curriculum (C1 – Collaborative Planning; C2 – Written Curriculum; C3 – Teaching & Learning; C4 – Assessment)

CIS (2016) – Terminology: Domains
A. Purpose & Direction
B. Governance, Leadership & Ownership
C. The Curriculum
D. Teaching and Assessing for Learning
E. The Students’ Learning and Well-Being
F. Staffing
G. Premises and Physical Accommodations
H. Community and Home Partnerships
I. Boarding/Homestay/Residential (where relevant)

ACE (via Yemini 2012) – Terminology: Indicators
1. Articulated commitment; 2. Academic offerings; 3. Organizational infrastructure; 4. External funding; 5. Institutional investment in faculty; 6. International students and student programmes

Muller (2013) – Terminology: Characteristics
1. School philosophy and values; 2. Governance and management practices; 3. International curriculum perspective; 4. Ethical practice in an international context; 5. Cultural composition of the school – internal and external community; 6. Third Culture Kids; 7. Linguistic fluency; 8. Service learning; 9. Teaching practice and professional development; 10. Student life.

The implication of these comparisons is that the current prototype model of the web chart seeks to unite common factors and strong features. With the more low-stakes, invitational, user-
friendly and open-ended ISA tools providing inspiration for the radials and many elements, the IB and CIS tools give structure and accountability, as well as supporting resources and definitions.

Consequently, some adjustments have been made to the radials of the web (Fig. 4). Pedagogies has been moved from the Curriculum radial to the Faculty radial to better reflect the teacher-controlled influences on IMaGE (Ladson-Billings 1992, 1995a, 1995b; Snowball 2007; Hayden & Thompson 2011). Transitions has been added to the Students radial (and presented within the elements of the Community & Culture and Faculty & Pedagogy radials) to make explicit the importance of incoming and outgoing transitions in the development of IMaGE (Useem & Downie 1976; Fail et al. 2004; Fail 2007, 2011). Community has been added to the Culture radial to emphasise the importance of interactivity between various internal and external groups (Ladson-Billings 1992; Bennett 2014), and Ethical has been added to Action in light of the growing body of research on effective and sustainable school service (Berger-Kaye 2007). Additionally, the scale of the web chart has been adjusted, from the original 1-5 to the current 0-4. This serves two purposes: the first to avoid confusion with Likert-style 1-5 rating scales, and the second to find alignment with the four-band rating scales of the CIS and IB protocols (with an additional ‘zero’ value for ‘no evidence’).

Figure 4: Evolving the IMaGE web chart, including re-scaling and renaming some radials
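The re-scaled 0-4 web chart can be thought of as a polygon whose vertices sit on eight equally spaced spokes, each at a distance from the centre given by that radial’s score. A minimal sketch of that mapping follows; the radial scores shown are purely illustrative values, not results from the study:

```python
import math

# Illustrative 0-4 radial scores (hypothetical values, for demonstration).
radials = {
    "Values & Ideology": 3.2,
    "Leadership, Policy & Governance": 2.8,
    "Students & Transition": 3.0,
    "Faculty & Pedagogy": 2.5,
    "Community & Culture": 2.9,
    "Core Curriculum": 3.1,
    "Global Citizenship Education": 2.6,
    "Ethical Action": 2.4,
}

def web_chart_points(scores):
    """Place each radial score on an equally spaced spoke: the vertex's
    distance from the centre is the 0-4 score, so the plotted polygon
    grows outward as quality increases."""
    n = len(scores)
    return [
        (s * math.cos(2 * math.pi * i / n), s * math.sin(2 * math.pi * i / n))
        for i, s in enumerate(scores.values())
    ]

points = web_chart_points(radials)
```

Joining these vertices (and closing the polygon back to the first point) produces the filled ‘web’ shape; any charting library’s polar or radar plot performs essentially this transformation.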
These adjustments, in alignment with Singh & Qi’s (2013) five concepts of 21st century international-mindedness, will be used to populate and describe the elements of the radials for the IMaGE tool evaluation rubrics, as discussed in the next section.

2.4 Influences on IMaGE: Web-chart radials and elements

Although the main focus of this paper is to pilot-test the prototype IMaGE tools (web chart and rubrics), it is necessary to justify some of the choices made in the selection of radial elements and their descriptors, with the complete list of elements presented in table 2. Some key sources used in the development of the element descriptors are presented under a few element columns of the prototype rubrics in Appendix 2, as an example of how the completed rubrics might present their pedigree. The standards outlined in the tools compared above offer much of the structure of the IMaGE tools, yet there has been significant supplementation from literature sources. Although it is not possible in the scope of this assignment to justify the choices of all four degrees of quality for all 49 identified elements, I will justify some important or counter-intuitive choices, and concepts that cross radials, adding to the “tension in the fabric of quality” (Nikel & Lowe 2010).

The ISA, CIS and IB structures are common in their description of elements of mission and core values, for example. Where Sylvester (1998) noted that “the greatest challenge […] facing multilateral international schools is one of worldview”, we have since seen the development – almost homogenization (Fertig 2007) – of international school missions.
In keeping with Cambridge (2003) and Muller (2013), who discuss the internationalist and globalist missions of schools and the role of internationalization language in the foundational documents of schools, the ‘Mission & Vision’ and ‘Core Values & Rules’ elements are identified. Internationalization is most likely a deliberate choice (Cambridge 2002), with the adoption of international standards balancing the idealistic internationalist worldview mission of the school against the pragmatic realities of school business (Cambridge & Thompson 2014); the adoption of international programmes not only provides market legitimacy and brand recognition for the school (Murphy 2012), but also affords tools, resources and definitions for their development in line with values and ideology.
Multilingualism sits alongside intercultural understanding and global engagement in the IB’s conceptual framework for international-mindedness (IB 2013), yet its realisation reaches beyond the simple acquisition of a second (or third) language (Nieto 2009; Singh & Qi 2013). Multilinguals “experience the world differently […] because culturally important concepts and ways of thinking […] are embedded in language” (Duckworth, Levy & Levy 2005). Singh & Qi (2013) assert that an effectively multilingual learner can make connections between languages, using them as a “multimodal communication for international understanding”, leading to their classification of the school’s contribution as post-monolingual language learning. Multilingualism therefore influences multiple radials and is clearly a shared responsibility of an international school, contributing to the elements of ‘Promotions & Publications’ and ‘Access Languages’ in the Values & Ideology radial, ‘Multilingual Support’ in the Faculty & Pedagogy radial, ‘Curriculum Offerings’, ‘Multilingualism in the Curriculum’ and ‘Communicating the Curriculum’ in the Core Curriculum radial, and ‘Intercultural Engagement’ in the Ethical Action radial.

Another wide-reaching concept is the perception of internationalism by groups and individuals, including ‘Leadership Communication’ and ‘Recruitment’ in the Leadership, Policy & Governance radial, ‘Students & Internationalism’ in the Students & Transition radial, ‘Faculty & Internationalism’ and ‘Professional Development’ in the Faculty & Pedagogy radial, ‘Community, Culture & Events’ in the Community & Culture radial, ‘Curriculum Frameworks’ and ‘Curriculum Content and Materials’ in the Core Curriculum radial, and ‘Values Education’ in the Global Citizenship Education radial.
While many attempts at measuring perceptions of IM have been made, the development of true IM as an extension of self may be riddled with challenges (Skelton 2015), and it is perhaps harder still to accurately measure the impacts of the school on the student’s IMaGE (after Hayden, Rancic & Thompson 2000). However, tools such as the Global Mindedness Scale, the Global Citizenship Scale or the Global Competence Model might be put to use to measure the perspectives of students, teachers (including applicants), leadership and governance, as well as to inform strategic and instructional planning.
Experiences of transitions in culture, language and values can be by turns enriching and culture-shocking (Useem & Downie 1976; Zilber 2009). As Morales (2015) notes, “the programs implemented, or lack thereof, are imperative to the successful transition of TCKs into their new, diverse, multicultural environment”. This can be expanded to the community beyond the students, with the induction of new faculty crucial to the development of their own international-mindedness, as well as to providing the ‘safe’ conditions for them to influence the IMaGE of their students (Fail 2011; Hersey 2012). Consequently, issues of incoming and outgoing transition are included in ‘Support of Faculty’ in the Leadership, Policy & Governance radial, ‘Orientation & Integration’ and ‘Outgoing Transition’ in the Students & Transition radial, ‘Professional Development’ in the Faculty & Pedagogy radial, and ‘Community Transition & Support’ in the Community & Culture radial.

With transition comes change in interaction, another wide-reaching concept in the radials. Intercultural interactions shape relationships and the success of the organization (Hofstede 1986, 1994), yet understanding of intercultural communication may not come naturally (McCaig 1994) and might need to be explicitly taught or expected (Heyward 2000; Hayden 2015; Snowball 2015), leading individuals and the organization from monocultural and multicultural communication through the cross-cultural and intercultural to the transcultural (Van Hook 2005, 2011). A similar, more practical hierarchy is presented in Bennett’s (2014) Developmental Model of Intercultural Sensitivity (DMIS), which I have used in descriptors relating to interactions.
With ethno-centric interactions (the experience of one’s own culture as “central to reality”) at the lower degrees of quality and ethno-relative interactions (the experience of one’s own and other cultures as “relative to context”) at the higher, I am curious to see how this terminology is interpreted by the survey volunteers. With intercultural understanding and communication a central part of intercultural competence (Morais & Ogden 2011), effective interaction across cultures reaches many radials: ‘Leadership Communication’ and ‘Recruitment’ in the Leadership, Policy & Governance radial, ‘Intercultural Relationships’ in the Students & Transition and Faculty & Pedagogy radials, ‘Local Community’ and ‘International Community’ in the Community & Culture radial, ‘Intercultural Relationships’ in the Global Citizenship Education radial and ‘Intercultural Engagement’ in the Ethical Action radial.
Where intercultural education requires a change in pedagogy (Ladson-Billings 1992, 1995a, 1995b; Duckworth, Levy & Levy 2005), the recruitment and ongoing professional development of teachers require significant attention (Cambridge 2002; Hayden 2015). With the growth of similarly-missioned schools, we see increasing competition for internationalized teachers, leading perhaps to a “war for talent” (Lauder 2007) and a “segmented labour market” in which “disproportionately large” sections of faculty come from ‘trusted’ nationalities (Canterford 2003). The implications for the IMaGE of the school may be a drift towards further homogenization (Fertig 2007), and so there is a need for purposeful, effective professional development (Hayden 2015), framed on pedagogies of “intellectual equality” (Singh & Qi 2013). Useful here is Snowball’s (2015) model of professional development, with seven key domains reflected throughout the elements of the Faculty & Pedagogy radial and iterated through many of the remaining radials:

1. “International Education in Context
2. Teaching in Multilingual Classrooms
3. Multiculturalism
4. Student characteristics and learning
5. Transition
6. Internationalising curricula
7. The reflective international teacher” (Snowball 2015)

As we see from Snowball’s domains, the teacher plays an important role in internationalising the curriculum (Wylie 2008; Cambridge 2011), with an internationalized curriculum becoming truly planetary (Singh & Qi 2013), based on diverse curricular materials and resources, open to multilingual access and making use of educational technologies to learn across boundaries and borders (Singh & Qi 2013; Lindsay & Davis 2012).
Although the Core Curriculum radial includes elements that seek to internationalize learning, the purpose of the radial is to emphasise the ‘nuts and bolts’ of a recognizable, standardized education, the product of which might be measured by traditional means such as exams or standards-
based assessment. Global Citizenship Education, though interacting with the Core Curriculum, has been given its own radial, bringing together some of the ideas discussed above (intercultural interaction, values education, educational technology), as well as making explicit the CIS’s standards of ethics, diversity, global issues, communication, service and leadership (CIS 2016) and the practices of global competence and global engagement outlined in OXFAM’s guides (2016).

The concepts of the first seven radials lead towards the outcomes of Ethical Action, in the final radial. Combining elements of programmes, intercultural engagement and ethics with elements of Student Agency, Sustainability and Efficacy, derived from the works of Berger-Kaye (2004, 2007, 2009), the descriptors offer a set of objectives (Woolf 2005, in Muller 2012) that contribute to the development of “global competence, cross-cultural communication, enhancing mutual understanding, personal growth etc.” (Muller 2012). These elements also connect to the IB’s Middle Years Programme Service Learning Outcomes, themselves heavily guided by Berger-Kaye’s work (IBO 2013), in which Service is defined as a subset of Action.

In conclusion, although the IMaGE tools are in a prototype form and subject to critique and evaluation, I have sought to connect diverse concepts in international education and metrics of internationalization. Though each radial may in itself feed a dissertation, these prototype elements (shown below in table 2) and descriptors (presented in full in Appendix 2) offer a starting point for the IMaGE of our international school.
Table 2: Identifying the Radials and Elements of the IMaGE Web Chart

Radial 1: Values & Ideology
● Mission & Vision
● Core Values & Rules
● Accreditation
● Promotions & Publications
● Access Languages

Radial 2: Leadership, Policy & Governance
● Board Representation
● Board Policies
● Leadership Diversity
● Leadership Communication
● School Policies
● Recruitment
● Support of Faculty

Radial 3: Students & Transition
● Student Diversity
● Students & Internationalism
● Intercultural Relationships
● Admissions & Promotions
● Orientation & Integration
● Outgoing Transition

Radial 4: Faculty & Pedagogy
● Faculty Diversity
● Faculty & Internationalism
● Intercultural Relationships
● Professional Development (PD)
● Faculty & Curriculum
● Pedagogies
● Multilingual Support

Radial 5: Community & Culture
● Local Community
● Host Culture Influence
● International Community
● Participation in Global Community
● Community Transition Support
● Community, Culture & Events

Radial 6: Core Curriculum
● Curriculum Frameworks
● Curriculum Content & Materials
● Curriculum Offerings
● Multilingualism in the Curriculum
● Communicating the Curriculum
● Educational Technologies

Radial 7: Global Citizenship Education
● Values Education
● Thinking & Learning Skills
● Global Engagement
● Global Centrism
● Intercultural Relationships
● Educational Technology

Radial 8: Ethical Action
● School Programmes
● Student Agency
● Sustainability
● Intercultural Engagement
● Ethics
● Efficacy
Chapter 3: Research Design

3.1 Timeline

This research began in 2012 with my EIC assignment, and a basic outline for the research plan was developed in 2015 through the RME unit. However, the bulk of the research, including the literature review, survey design, web-chart development, rubric creation (and pre-pilot), testing and evaluation, was carried out between October 2015 and June 2016, with the literature review building on the EIC and RME work over the winter and spring. Pilot-testing of the web and rubrics took place across two weeks in May 2016, giving sufficient time to process the data and seek feedback from volunteers before the end of the academic year in June, and the consequent loss of access to volunteers.

3.2 Research Strategy

As noted in my RME work (Taylor 2015), this pilot test, based on the collection of empirical data, leans towards a positivist perspective on educational research (Cohen et al. 2013, p7), though the application of what might be considered an inherently subjectivist set of value descriptors in the rubrics generates an ontological problem: how can I generate valid and reliable empirical, comparative data founded on a basis of participant perceptions? My proposed strategy, developed further through this research, takes a pragmatic, multi-level, quantitative-dominant mixed-methods approach (Johnson et al. 2007), which will allow me to recognize “four dimensions” of the study: “research question (what?), purpose (why?), process (how?) and potential (scope of results)” (McLafferty & Onwuegbuzie 2006, in Johnson et al. 2007). I identify the strategy as quantitative-dominant as the greater weight is placed on the empirical data generated by the study, with the target outcome, a web-chart of our school, itself the product of statistical processing.
Qualitative methods add value and insight to the results yielded for the generation of the IMaGE, and for the evaluation of the rubrics used. The multilevel identification signifies the integrated yet varied use of data types at various stages of the study (Cohen et al. 2013), with qualitative descriptions informing the generation of quantitative rubrics, quantitative data
emerging from the application of the rubrics and evaluation questionnaires, and a return to qualitative data (written and oral) in clarifying and expanding on volunteer responses. This pragmatic, “methodologically pluralist approach to research” (Cohen et al. 2013) allows for flexibility and a ‘practice-driven’ investigation; the naturally social, inter-disciplinary field of education lends itself to ‘communities of practice’ in research (Denscombe 2008). Furthermore, according to Johnson et al. (2007), “the mixed methods research paradigm offers an important approach for generating important research questions and providing warranted answers to those questions”; a sentiment of particular importance in this study, which seeks to pilot-test prototype evaluation rubrics, a process that may in itself lead to unpredicted results or the generation of new research questions.

A pragmatic mixed-methods approach allows triangulation of findings between the qualitative and the quantitative, and a methodological triangulation (Cohen et al. 2013) between the application of the tool and its evaluation; an attempt to increase reliability and “a powerful way of demonstrating concurrent validity” (Campbell & Fiske 1959, in Cohen et al. 2013). As a result, the quantitative and qualitative components of a study become “mutually illuminating” (Bryman 2007, in Cohen et al. 2013, p24). As a scientist and science teacher, I am comfortable with the use of statistical analysis and comparison, and will tend towards graphical and numeric presentations of data, whilst considering issues of validity and reliability and recognising the importance of qualitative observations in interpreting empirical outcomes.
In my role as Director of Learning for the school, I am the professional learning leader, seeking to support the development of ‘communities of practice’ as described by Denscombe (2008); through choosing a mixed-methods strategy for a project that generates a whole-school case-study, I hope to be able to demonstrate an example (albeit elementary) of a “mutual collaboration linked to a key research problem” (Denscombe 2008). Finally, as noted in my RME paper, as this pilot study seeks to test and evaluate a tool of my own creation, I am highly aware of the potential for personal bias to slip into its evaluation (Denscombe 2014), though
recognize that even with triangulation through mixed-methods data collection, I might not be successful in removing all bias (Fielding et al. 1986, in Cohen et al. 2013).

3.3 Research Design

The pragmatic, multi-level, quantitative-dominant mixed-methods strategy justified above has two main purposes and four components. The primary purpose, a pilot-test of the IMaGE tools, is supported by the secondary purpose, a small-scale case-study of our school. The components of the research include the design of the rubrics and survey tools, initial engagement with volunteers through a preliminary ‘pre-test’ survey, a small-scale case-study of the IMaGE of our school based on the application of the prototype rubrics, and finally a follow-up evaluation of the web-chart and rubrics using a questionnaire and open-ended responses. The literature review, in Chapter 2, was used to refine the model of the web-chart, select the eight radials, identify their contributing elements and then generate 0-4 scale descriptors for the degree of quality potentially observable by stakeholders. Due to the limitations of scale of a dissertation, the broad scope of the web idea and the prototype nature of the aim of the study, this was an early attempt to align qualitative indicators of the IMaGE of a school with an empirical quantitative assessment method. In this section, I will justify the pilot and case-study design of the research, before expanding on the design of individual tools and procedures in section 3.5.

Piloting is a crucial element of the development of survey tools in educational research, and the primary purpose of this study; the outcomes of this pilot should help evaluate “the reliability, validity and practicability” of the IMaGE tools (Cohen et al. 2013).
To this end, the feedback questionnaire has been designed to collect targeted feedback on the following factors identified by Cohen et al. (2013, p.402):
● Clarity, construction and layout
● Feedback on: validity, operability, clarity, readability for the target audience(s), layout, numbering, formatting
● Identify omissions and redundancies
● Identify items that are too easy/difficult or too nuanced
● Identify commonly misunderstood or problematic items
● Evaluate time requirements (reasonable burden)
● Identify how motivating/non-motivating the tool might be for various audiences

Consequently, the small-scale case-study provides part of the overall pilot-test of the IMaGE tools, where the outcomes of the pilot will inform the further evaluation and development of the survey (Cohen et al. 2013). Defined by boundaries of time (2016, between accreditation visits), location (Japan) and institution (a three-programme IB World School), this case-study forms part of a “step to action” for future development (Cohen et al. 2013). This embedded, single-case design (Yin 2009) contributes to the wider pilot-test of the IMaGE tools, but the small-scale nature and sample size of the pilot limit the generalizability of the outcomes (Cohen et al. 2013). Though I intend to strengthen its construct validity through the pilot process, its ecological validity may be limited to providing clues for further investigation within our context (Cohen et al. 2013); the school is acting more as a “real-life context” (Yin 2009) for the pilot, rather than the pilot giving rise to a comprehensive, empirical view of the school. As volunteers participate in the case-study, they provide valuable ongoing feedback for the pilot of the tool: they will “describe, illustrate and enlighten” its development (Yin 2009, in Cohen et al. 2013).

3.4 Participants (Research Volunteers)

As outlined in my RME paper, I will use a non-random, convenience-based participation model, using a purposive selection of colleagues (Cohen et al. 2013).
Where I had originally planned a sample of secondary teaching colleagues to evaluate the school’s IMaGE, satisfying a 95% confidence level with a 5% confidence interval, this would have required a sample size of 40 of the 45 colleagues (calculated with www.surveysystem.com; Cohen et al. 2013). I had also considered
student and parent random population samples, to allow sub-group analysis. However, due to the pilot nature of this project, it seemed inappropriate to generate a bureaucratic burden on so many people; this might be better suited to a wider field-test of the IMaGE tools, if they are deemed useful, reliable and operable, and if this study concludes that there is a useful purpose to the project. Consequently, my plan for this research shifted to a smaller group of ten participants (‘volunteers’), purposively sampled in order to gain a cross-section of the school, including teachers from kindergarten to grade 12, encompassing all three of the school’s IB programmes, a diversity of teaching disciplines, administrative leadership, a counselor, student support, teacher-parents and (within this sample) colleagues with a range of tenures at the school, from one to fifteen years. The goal in selecting this sample was to provide a focus group whose collected perspectives encompassed the contents of the radials, allowing for some analysis of individual versus group evaluation of the IMaGE of the school. A final purposive element to the selection was to choose participants who have a strong understanding of two or more of the following: curriculum development, international education, and the historical context and development of the school. This aimed to ensure greater validity in their responses and, by extension, avoid wasting the time of struggling colleagues, or those whose perceptions may not be representative of the school as a whole. Participants were recruited through an all-faculty email invitation, to which the ethics and voluntary informed consent form was attached (Appendix 1). I did not initiate any ‘in person’ attempts to recruit volunteers, though I did respond enthusiastically to any colleagues who raised the topic in conversation.
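For reference, the abandoned whole-faculty sample-size calculation can be reproduced with the standard formula for estimating a proportion, plus a finite-population correction. This is a sketch of the approach used by calculators such as www.surveysystem.com; the z-value (1.96 for 95% confidence) and worst-case p = 0.5 are assumed defaults, not values taken from that tool.

```python
def required_sample(population: int, z: float = 1.96,
                    margin: float = 0.05, p: float = 0.5) -> int:
    """Sample size for estimating a proportion, with finite-population
    correction: n = n0 / (1 + (n0 - 1) / N), where n0 = z^2 p(1-p) / e^2."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return round(n0 / (1 + (n0 - 1) / population))

# A secondary teaching faculty of 45, at a 95% confidence level with a
# 5% confidence interval:
print(required_sample(45))  # 40 of the 45 colleagues
```

This shows why a full-confidence sample was impractical for a pilot: with such a small faculty population, nearly everyone would have had to respond.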
This approach reduced personal pressure on faculty, giving them an opportunity to find out about the study and its level of commitment before choosing to volunteer their time (Cohen et al. 2013). Once volunteers were identified, I offered a choice of data collection time-slots and provided light refreshments and a group working setting (though with private responses) to ensure comfort and access to support or clarification, and to reduce stress.
3.5 Methods and Procedures of Data Collection

In a departure from the planned internet-based survey of my RME paper (Taylor 2015), I conducted this survey using personal paper-based questionnaires, rubrics and discussions, as online surveys are more at risk of drop-outs (Denscombe 2009, in Cohen et al. 2013, p.261). Having already selected the ten volunteers, I avoided ‘unit non-response’ and ‘item non-response’ issues (Durrant 2009) through supervision, rather than Durrant’s suggested imputation (inappropriate due to the small sample size). Through in-person support in a focus-group setting, I sought to avoid the following contributors to non-completion (or rushed responses) (Cohen et al. 2013):
● Answer refusal due to misunderstanding, typographical errors, confusion or other survey-generated problems
● Item non-response due to missed instructions, pages or mis-read items
● Time pressure of competing demands for respondent time
● False responses due to ignorance or misunderstanding of the topic

The survey consisted of three components: a preliminary pre-test questionnaire, the comprehensive prototype rubrics of the IMaGE radials and a follow-up evaluative questionnaire. I will outline these below, beginning with the rubrics, as they are the foundation for the pre-test questionnaire and the basis for evaluation in the pilot. These three tools have the following in common, in order to ensure a high completion rate and to reduce distractors (Cohen et al. 2013):
● Making the surveys easy to read and complete (as far as possible)
● Making instructions very clear (in written and oral form)
● Giving advance notice and protected time for the survey
● Cutting the survey to include only essential questions and sections
● A simple linear structure that uses recognizable survey formats
● Limiting the number of open-ended responses and placing them at the end of survey sections
3.5.1 Rubric Design

The use of a rubric allows for higher operability and accessibility to multiple stakeholders than surface-level rating scales, and carefully-designed rubrics may increase validity and reliability (Jonsson & Svingby 2007). Rubrics allow for clear identification of levels of mastery (Allen & Tanner 2006), or in the case of this study, degrees of quality. They can be broken down into clearly-defined subsections (such as subject standards or skills, or in this case the elements of each radial), and their application can provide reliable, growth-focused evaluation that is accessible to diverse users (Jonsson & Svingby 2007).

Figure 5: Example prototype rubric for a single radial of the web chart. Full rubrics are included in Appendix 2.

Whereas in the RME paper I proposed a 1-5 model for the rubric, here I used a 0-4 scale. Where 1-5 might suggest some degree of active quality at all levels, the 0-4 scale includes ‘not at all or no evidence’ plus four levels of quality. This fits into a general 4-level rubric model, and descriptors are written to ensure a clear distinction between levels, which can be supported by evidence (Allen & Tanner 2006; Jonsson & Svingby 2007). In addition to the four bands representing
four increasing degrees of active quality plus a ‘zero’ band, I included an ‘I don’t know’ band; the important distinction being that ‘zero’ denotes a known lack of quality, whereas ‘I don’t know’ identifies a lack of knowledge on the part of the volunteer. The meanings of the bands of the rubrics are summarized in Table 3 below, and these generic definitions were used as a reference for writing the descriptors for each of the elements.

Table 3: General descriptors for ‘degrees of quality’ on the radial evaluation rubrics
4 / Very high: Aligning with ‘exceeds expectations’, or a very high standard of implementation as might be expected of a high-performing international school with an exemplary IMaGE programme.
3 / High: Aligning with ‘meets expectations’, or a high standard of implementation as might be expected of a well-performing international school.
2 / Moderate: Aligning with ‘approaches expectations’, or a moderate standard of implementation of IMaGE practices.
1 / Low: Aligning with ‘does not meet expectations’, or a low (or tokenistic) implementation of IMaGE practices.
0 / No evidence: The school cannot provide evidence for this element.
‘I don’t know’: The respondent does not know the school’s degree of implementation.

After identifying the degree of quality of each element in the radial rubric, volunteers applied a ‘best-fit’ rating for the radial as a whole, making a balanced, holistic judgement for the radial, where the relative weights of the elements may or may not be seen as equal. It also allowed ‘I don’t know’ ratings to be discounted from the overall rating for the radial; a lack of knowledge of one element does not affect the degree of quality of others. This is preferable to an averaging that assumes equal weighting, and results in an integer rating at the individual level.
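The discounting of ‘I don’t know’ responses can be illustrated in a short sketch. The `suggested_best_fit` helper and its use of the median are hypothetical illustrations only: the published best-fit rating is a holistic human judgement made by each volunteer, not a computed value.

```python
from statistics import median

def known_ratings(element_ratings):
    """Drop 'I don't know' responses (encoded here as None), so that a
    lack of knowledge of one element cannot affect the others."""
    return [r for r in element_ratings if r is not None]

def suggested_best_fit(element_ratings):
    """Hypothetical integer starting point for the volunteer's holistic
    judgement; an integer median is one simple, invented convention."""
    return int(median(known_ratings(element_ratings)))

# Three elements rated 3 and one 'I don't know': the unknown is discounted
# and the rating is not dragged down.
print(suggested_best_fit([3, 3, None, 3]))  # 3
```

In practice, a function like this could only pre-fill a suggestion for discussion; the decisive step remains the volunteer's weighing of the elements.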
The best-fit approach also allows for data to be collected at the individual and population levels, and for data to be compared at the radial and element levels. Figure 6 demonstrates two examples of the best-fit rating in application.
Figure 6: Example applications of the best-fit approach to the rubrics. [Grids of ratings (4-0 and ‘I don’t know’) against the elements of a radial for two example respondents.]
Respondent 1 (elements A-D). Best fit: 3. Note that the lack of knowledge of element C does not affect the overall rating.
Respondent 2 (elements A-E). Best fit: 2. The ‘0’ rating here is based on the respondent’s knowledge of the school’s (lack of) quality in this element, and so swings the overall radial rating from a 3 to a 2.

In terms of the operability of this method, as teachers in an IB World School, the volunteers are accustomed to using a four-band evaluation rubric, the foundation of much assessment in the standards-based middle-years and diploma programmes. Furthermore, our teacher evaluation model uses a four-band rubric, and completion of the IB self-study process requires the development and application of four levels of implementation of the standards and practices, with the highest rating requiring ‘extraordinary evidence’ (IB cite). This may, in turn, feed into school self-study efforts for accreditation without duplicating effort.

3.5.2 Preliminary Questionnaires

The purpose of the preliminary questionnaire was twofold: to help ‘tune in’ the volunteers to thinking about the school’s IMaGE, and to provide a baseline assessment of volunteer perceptions of the school’s IMaGE. I was interested here in the relationship between volunteers’ preconceptions of the IMaGE and their responses when applying the rubrics. In keeping with the principles outlined above (Cohen et al. 2013), this initial questionnaire aims to be brief, clear and easily operable. Volunteers completed it after the project was explained to them and they had read and completed the ethics and voluntary consent form.
The purpose of the study was restated at the top of the linear survey, along with the working definition of IMaGE for easy reference. Each radial was clearly
identified and a ‘to what extent…?’ question posed, against which the volunteer made a 0-4 scale judgement based on their perception of the school. The 0-4 scale descriptors match those in the rubrics. Figure 5 shows the introductory excerpt from the questionnaire, with the full tool presented in Appendix 1. The preliminary questionnaire concluded with the option for the volunteer to plot their own web-chart based on their responses, with the intention of further familiarizing the volunteer with the web-chart model. Upon completion, the responses were collected and tabulated, and the rubrics given to the volunteer.

Figure 5: Excerpt from the preliminary questionnaire.

3.5.3 Evaluation Questionnaires

This final questionnaire was used to capture evaluative data on the web-chart (including its usefulness and clarity), the content of the radials, and the clarity and usefulness of the rubrics. For this purpose, the rating tool switched to a 1-5 Likert-style response format, founded on agree/disagree statements with an assumed interval scale (Carifio & Perla 2007), combining descriptors with numerical values to suggest equality of intervals and positioning the strongest positive response at the ‘natural’ ‘5’ rating (Cohen et al. 2013). This signalled a shift in purpose to the respondent, using a commonplace format that is recognizable to questionnaire-users (Boone & Boone 2012). The questionnaire kept to a simple, linear format, avoiding distracting early open-ended responses (Cohen et al. 2013), though I provided opportunities for qualitative justification after each section, recognizing
that “scale items are not autonomous and independent (...), but rather they are a structured and reasoned whole” (Carifio & Perla 2007). For the same reason, I had to bear this in mind when analysing the yielded data: although I generated means, modes and standard deviations of responses (Boone & Boone 2012), I could not reliably use any individual point as the basis of a conclusion, particularly with a small sample population. Although interpretation of Likert-style scale data can be a minefield, “questions are the nub” (Carifio & Perla 2007), and through the composition of clear, standard-format response questions, along with in-person support and clarification and the collection of qualitative commentary, I aimed to ensure the validity of the evaluative data collected. Figure 6 below shows an excerpt from the questionnaire, with the full questionnaire presented in Appendix 3.

Figure 6: Excerpt from the evaluation questionnaire
3.5.4 Data Handling

All data were recorded initially on paper copies of the questionnaires and rubrics, encoded (volunteer 1, 2, etc.) so that they could be aligned and compared where needed. Space was provided on the rubrics for notes or queries as qualitative data, and once the eight rubrics and sets of questionnaires were completed, they were collected for data transcription into Microsoft Excel, with responses tabulated for all elements, radials and questionnaire components. Qualitative comments (either written or discussed) were recorded accurately and presented under the relevant tables in Appendices 3 and 4, where they were coded in alignment with the rubric elements or questionnaire sections to which they refer. I used quotation marks for direct transcriptions, with phrases in parentheses indicating my own interpretation of the comment or clarification provided to the volunteer in the discussion.

3.6 Validity and Reliability

Validity

As noted in my RME paper, “threats to validity and reliability can never be erased completely” (Cohen et al. 2013); with this multi-layered mixed-methods approach, there are some key potential sources of error that had to be addressed. This is complicated somewhat by the pilot nature of the research: through application and evaluation of the tools I aim to determine their validity and reliability, yet I need valid and reliable data in the process. Reliance on a quantitative-dominant strategy requires strong construct validity in order to allow for statistical analysis of empirical data (Fraenkel et al. 2012). To this end, I used low-inference descriptors (Cohen et al. 2013) and context-appropriate terminology, avoiding confusion. This clarity was evaluated through the evaluation questionnaire.
The development of these prototype descriptors was based on a literature review (Chapter 2) that sought to ensure the theoretical validity of the concepts investigated, with the wide-ranging nature of the radials and elements contributing to the content validity of the tools in
reference to IMaGE (Cohen et al. 2013). Triangulation of the pre-test, best-fit and evaluative data aimed for internal validity, to increase the reliability of any conclusions made (Cohen et al. 2013). The limits of external validity in this study are bound by the extent to which the volunteers’ collected responses truly capture the IMaGE of the school, and as a pilot-test I expect this to be somewhat narrow (Cohen et al. 2013). Similarly, although this project might ultimately describe characteristics of all international schools, the ecological validity of the results was limited to our own school context (Cohen et al. 2013).

Reliability

The mixed-methods nature of the research required multiple safeguards for reliability (Taylor 2015). To ensure the stability of quantitative data I used an appropriate sample size (n=10), and applied a two-tailed, paired t-test to compare population means, along with standard deviations of all questionnaire and rubric items and the identification of modal responses in the evaluation questionnaire. Although the sample size of ten participants was smaller than the planned 40 of the original, internet-based survey method, I adjusted for this with a focused protocol, careful volunteer selection and measures to reduce non-response (or false response). Comparison of the pre-test and best-fit data provides some test of reliability as equivalence, and the significance of differences between these means may indicate areas for analysis and evaluation (Cohen et al. 2013). With a 0-4 scale for the pre-test questionnaire and a 1-5 scale for the evaluation questionnaire, I will identify standard deviations of 0.8 or greater as ‘large’ (where n=10), and investigate accordingly. In viewing reliability in these qualitative data through the lenses of credibility, confirmability, applicability and transferability (Cohen et al.
2013), I narrowed the scope of qualitative data collection from the proposal in my RME assignment: from an interview with structured and open-ended questions following an online survey (Taylor 2015), to shorter guided-response comment boxes on the evaluative questionnaire. With in-person support and the option to annotate the rubrics and seek clarification on any elements of the survey, this avoided problems of item non-response (Cohen et al. 2013). Finally, triangulation remained the foundation of reliability and validity in this survey,
with personal access to volunteers, focused questionnaires and rubrics, and clear prompts for qualitative responses working together to ensure clarity of understanding of the yielded data (Cohen et al. 2013).

3.7 Methods of Data Analysis

All raw and processed data are presented in Appendix 3. Using Excel formulas, the following transformations and analyses were completed. Mean average values (n=10) were calculated for all rating-based questions and for the pre-test and best-fit data, to show the central tendency of the data for comparison, using the formula “=AVERAGE”. Each mean is accompanied by the standard deviation (assuming a normal distribution), to indicate the variability (spread) of the data, using the formula “=STDEV”. With a sample size of 10 volunteers and scales of 0-4 or 1-5, I set a standard deviation of 0.8 or greater as being ‘high’ and worthy of further investigation, though the purpose of the standard deviation is also comparative, allowing one to observe the relative spread of data between population means (Cohen et al. 2013, p.514). Additionally, in the evaluative questionnaire the modes were identified, to avoid issues of the standard deviation being influenced by outliers “exerting a disproportional effect” on the mean and standard deviation (Cohen et al. 2013, p.514). Pre-test and best-fit rating data for each of the radials were compared at the 95% confidence level through a two-tailed, paired t-test, using the formula “=TTEST” on the raw data with a critical P-value of 0.05; any returned P-value below this critical value leads to rejection of the null hypothesis (no significant difference) and thus the conclusion of a significant difference between means (Cohen et al. 2013, p.515-517).
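The Excel transformations above can be mirrored in a few lines of code. The ratings below are invented placeholder data, not the study’s responses; the critical t-value of 2.262 is the standard two-tailed value for df = 9 at P = 0.05.

```python
from statistics import mean, stdev

# Invented example ratings (0-4) for one radial, n = 10 volunteers
pre_test = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3]
best_fit = [3, 3, 3, 3, 2, 3, 3, 3, 4, 3]

# Central tendency and spread, as with =AVERAGE and =STDEV
m, sd = mean(pre_test), stdev(pre_test)
flag_high_spread = sd >= 0.8  # the 'high' standard-deviation threshold

# Paired, two-tailed t-test on the differences, cf. Excel =TTEST(pre, best, 2, 1)
diffs = [b - a for a, b in zip(pre_test, best_fit)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / n ** 0.5)
T_CRIT = 2.262  # two-tailed critical value for df = 9 at P = 0.05
significant = abs(t) > T_CRIT
print(round(m, 1), round(sd, 1), round(t, 2), significant)  # 2.9 0.7 0.56 False
```

With these placeholder ratings, the difference between means is non-significant, matching the general pattern reported in Chapter 4.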
Although the small sample size and relatively limited range of potential responses (with large potential overlap) inhibit the potential for discovery of significant differences, this gave some indication of radials of concern for evaluation in the pilot test. Population means for pre-test and best-fit data were used to generate the IMaGE of the school, using the Excel ‘radar chart’ graphical presentation. For clarity of comparison, pre-test data were presented as a lighter, dotted line, and the best-fit (more reliable) outcomes were overlaid with a solid
line and darker shading. A setting of 25% transparency allows the two sets of data to be seen clearly simultaneously (Tague 2005).

3.8 Ethical Issues

Care was taken to adhere to guidelines provided by the British Educational Research Association (BERA 2011), with approval given on the Bath MA Education departmental ethics form before survey participants were sought. Explicit reference to BERA guidelines was made on the study participation (voluntary informed consent) form (Appendix 1), explained in person to the study participants before proceeding with data collection. Volunteers were active participants (BERA 2011), with the voluntary informed consent form, ‘call for volunteers’ email and introductory session all explicitly clarifying the process, the necessity of their participation, and the audience for reporting of the pilot study (BERA 2011). As the pilot study was conducted in an international school in Japan, care was taken to ensure that communication was inclusive, culturally sensitive and undertaken with the consent of the Head of School, whilst remaining within guidelines for British educational research (BERA 2011). Volunteers were informed of their right to withdraw (BERA 2011). Only voluntary adults were used in the pilot study, ensuring that no “children, vulnerable young people [or] vulnerable adults” (BERA 2011) could be affected by participation. No significant incentives were offered to participants, other than light refreshment for the duration of their survey session (BERA 2011). The privacy and confidentiality of respondents and their data were treated seriously (BERA 2011). In order to protect respondents’ time, reduce “bureaucratic burden” and facilitate support and feedback in using the pilot materials, survey responses were collected in two group sessions, though the privacy of individuals’ responses was protected.
Individual responses were anonymised in data processing to ensure that responses provided by participants could not be traced back to individuals through data presentation or statements made in this report. The confidentiality of respondents’ data was protected with regard to reporting outcomes to school leadership and other stakeholders in this pilot study, to ensure no potential personal or professional risk to volunteers. Respondent data were
kept on file in hard copy, and volunteers’ own data are available upon request by the volunteer. Additionally, volunteers will be provided with copies of the final dissertation and the outcomes of their own responses in the form of a web chart for personal reference.

3.9 Conclusion to Chapter 3

In summary, the pragmatic, multi-level, quantitative-dominant mixed-methods strategy outlined here generated data for the primary purpose (pilot-testing the IMaGE tools) and secondary purpose (small-scale case-study) of the research. Using a non-random, convenience-based participation model, with a purposive selection of ten colleagues (‘volunteers’) as a representative cross-section of the school’s faculty, a mixture of quantitative and qualitative data was collected. A summary of the stages of the research is outlined in Table 4 below.

Table 4: A summary of the IMaGE tools
Tool: Preliminary questionnaire. Description: 0-4 ratings based on guiding questions. Purpose: tuning in to the web-chart idea, radials and guiding questions. Analysis: preliminary perception data.
Tool: Radial rubrics. Description: 0-4 best-fit ratings based on in-depth evaluation of school practices. Purpose: apply the rubrics to the school and generate best-fit ratings for production of the IMaGE. Analysis: mean, standard deviation, IMaGE web chart; identify school (and tool) strengths and areas for growth.
Tool: Post-survey evaluation questionnaire. Description: 1-5 Likert-style questionnaire. Purpose: evaluate the IMaGE tools for purpose, clarity, improvement etc. Analysis: mean, mode, standard deviation.
Tool: Web chart (IMaGE). Description: radar-chart graphical presentation of mean ratings, pre-test and best-fit, for each radial. Purpose: the ‘IMaGE’ of the school, attempting to capture a visual definition of the school’s internationalism. Analysis: observe overall strengths, weaknesses and differences for evaluation of the school.
Chapter 4: Data & Analysis

This chapter presents summaries of the collected data and analyses the findings of the pilot study, beginning with the web-chart of IMaGE generated by the responses from the ten volunteers. I then unpack the survey results by radial, highlighting areas of interest and including important qualitative observations. Analysis of the findings is communicated in chapter 5.3. Chapter 5.4 summarises data regarding the evaluation of the web chart and evaluation rubrics. Broader implications of the study, evaluation of the research design and suggestions for future research will be communicated in chapter 6. Complete data tables are presented in Appendix 5.

4.1 The IMaGE of our international school

Figure 7 shows the results of the pilot study (n=10). The underlying grey-shaded polygon shows the initial survey results, generated before respondents had engaged with the rubrics. Superimposed on top of this, in blue, are the results generated after applying the evaluative descriptors to each of the elements of each radial, after which the respondent then selected a ‘best fit’ integer value for the radial. Both the pre-test and best-fit values used to create this web-chart (IMaGE) were generated using mean averages of the ten responses. Initial observation of the chart suggests general agreement between the pre-test and best-fit data, with the school being perceived as demonstrating an overall moderate-high degree of quality in terms of promoting IMaGE.
Figure 7: The IMaGE of an international school in Japan, based on the pilot-test of the radial evaluation rubrics (n=10).

Comparing the pre-test and best-fit results using a t-test (P<0.05) found no significant difference in any of the eight radials. The differences between pre-test and best-fit data for the radials of community & culture and core curriculum were closest to significant, at P=0.062 and P=0.081 respectively.
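A web-chart equivalent to the Excel radar chart of Figure 7 can be reproduced programmatically. This sketch assumes matplotlib is available and uses the radial means from Table 5; the styling (dotted grey pre-test, solid shaded best-fit, translucent fills) follows the presentation described in section 3.7.

```python
import math
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

radials = ["Values & Ideology", "Leadership, Policy & Governance",
           "Students & Transition", "Faculty & Pedagogy",
           "Community & Culture", "Core Curriculum",
           "Global Citizenship Education", "Ethical Action"]
pre_test = [2.9, 2.7, 2.0, 2.3, 2.2, 2.7, 2.4, 2.3]  # means from Table 5
best_fit = [3.0, 2.3, 2.4, 2.4, 2.7, 2.3, 2.5, 2.7]

# One angle per radial, then close each polygon by repeating its first point
angles = [2 * math.pi * i / len(radials) for i in range(len(radials))]
angles_c, pre_c, best_c = (s + s[:1] for s in (angles, pre_test, best_fit))

ax = plt.subplot(polar=True)
ax.plot(angles_c, pre_c, linestyle="dotted", color="grey", label="Pre-test")
ax.fill(angles_c, pre_c, color="grey", alpha=0.25)  # translucent, cf. 3.7
ax.plot(angles_c, best_c, color="tab:blue", label="Best-fit")
ax.fill(angles_c, best_c, color="tab:blue", alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(radials, fontsize=7)
ax.set_ylim(0, 4)  # the 0-4 degrees-of-quality scale
ax.legend(loc="lower right")
plt.savefig("image_web_chart.png")
```

This removes the dependency on Excel for future iterations of the tool, should the rubric data ever be collected digitally.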
Table 5: Comparing pre- and post-rubric data (pre-test rating vs. overall radial best-fit rating; paired t-test at P=0.05)
Radial 1: Values & Ideology. Pre-test mean 2.9 (SD 0.9); best-fit mean 3.0 (SD 0.7); difference -0.1; P=0.777; not significant.
Radial 2: Leadership, Policy & Governance. Pre-test mean 2.7 (SD 0.8); best-fit mean 2.3 (SD 0.5); difference 0.4; P=0.202; not significant.
Radial 3: Students & Transition. Pre-test mean 2.0 (SD 0.7); best-fit mean 2.4 (SD 0.5); difference -0.4; P=0.151; not significant.
Radial 4: Faculty & Pedagogy. Pre-test mean 2.3 (SD 0.5); best-fit mean 2.4 (SD 0.5); difference -0.1; P=0.660; not significant.
Radial 5: Community & Culture. Pre-test mean 2.2 (SD 0.6); best-fit mean 2.7 (SD 0.5); difference -0.5; P=0.062; not significant (nearly).
Radial 6: Core Curriculum. Pre-test mean 2.7 (SD 0.5); best-fit mean 2.3 (SD 0.5); difference 0.4; P=0.081; not significant.
Radial 7: Global Citizenship Education. Pre-test mean 2.4 (SD 0.7); best-fit mean 2.5 (SD 0.7); difference -0.1; P=0.754; not significant.
Radial 8: Ethical Action. Pre-test mean 2.3 (SD 0.5); best-fit mean 2.7 (SD 0.7); difference -0.4; P=0.145; not significant.

One immediate analysis of the data suggests that taking the mean average of data gathered by individuals working in isolation might lead to a ‘muted’ outcome in terms of ratings; where participants were perhaps unsure of a descriptor, but not so uninformed as to state ‘I don’t know’, they would tend towards more neutral responses. This, along with the narrow 0-4 rating scale, may have made it unlikely for any significant differences to be observed at the P<0.05 level, despite some respondents’ pre-test and best-fit data appearing different. Some survey respondents communicated surprise at their outcomes, suggesting that they expected the school to rate more highly; another commented that by thinking about the radials they were forced to think more critically about the
school, and thus their ratings were lower than they expected to give. These observations, among others, will be discussed later.

4.2 Unpacking the radials

The generally muted IMaGE that emerges from the survey suggests a need to examine the contributing data more closely. The tables below present summaries of the collected data (n=10), including mean average ratings for pre-test and best-fit evaluations of the school’s IMaGE, along with the outcome of a t-test for significance in the difference between those means. Mean ratings for each of the individual elements of the radial are included, with standard deviations also presented as an indicator of the variability of the data. No significant difference was found at the P<0.05 level in any pre-test vs. best-fit t-test. Summaries of pertinent qualitative observations, including participant annotations on the rubric (or questions or comments in person), are included with the results of each radial. Complete data are presented in Appendix 5.

Table 6 - Radial 1: Values & Ideology
Pre-test rating for the radial: mean 2.9 (SD 0.9). Overall radial best-fit rating: mean 3.0 (SD 0.7).
Ratings for individual elements of the radial (n=10) - Mission & Vision, Core Values & Rules, Accreditation, Promotions & Publications, Access, Languages: means 3.4, 3, 3.2, 2.5, 2.3; SDs 0.7, 0.8, 0.4, 0.8, 1.
Significance of difference between means of pre-test and best-fit ratings (P<0.05): No (P=0.777).

There was general agreement between pre-test and best-fit responses in this radial, and between volunteers, particularly with the elements of the mission & vision and of accreditation. As a highly mission-driven school, with recent IB programme evaluations having been completed, there was a strong understanding of the school’s policy documents, and of the IB standards and practices, which themselves largely align with the radials of the web.
That the mission of the school is well understood is no surprise; it is used widely and largely adheres to language akin to the ‘globalization of internationalization’ (Cambridge 2001, 2002, 2003). More uncertainty was expressed when it came
