

Journal of Education Policy
ISSN: 0268-0939 (Print) 1464-5106 (Online)

To cite this article: Lucinda McKnight & Andy Morgan (2019): A broken paradigm? What education needs to learn from evidence-based medicine, Journal of Education Policy, DOI: 10.1080/02680939.2019.1578902

Published online: 15 Feb 2019.
A broken paradigm? What education needs to learn from evidence-based medicine

Lucinda McKnight (a) and Andy Morgan (b)
(a) School of Education, Faculty of Arts and Education, Deakin University, Melbourne, Australia; (b) School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia

ABSTRACT
The paradigm of evidence-based education continues to inform the development of policy in a number of countries. At its simplest level, evidence-based education incorporates evidence, often that provided by randomised controlled trials, into classroom practice. England’s Education Endowment Foundation is in the process of exporting evidence-based school education, promoted as a medical approach, to other countries, including Australia. Australia is in the process of establishing an Education Evidence Base, informed by the government’s 2016 Productivity Commission report. While the literature around evidence-based education is explicit in identifying its basis in medicine, there has been little medical input into its development. Interdisciplinary examination of the medical literature reveals the contested nature and troubled state of evidence-based medicine and what policymakers need to consider to maximise the benefits of this translation into education.

ARTICLE HISTORY: Received 22 June 2018; Accepted 1 February 2019
KEYWORDS: Evidence-based education; epistemology; feminism; diversity; medicine

Introduction: the origins of evidence-based practice

An evidence-based movement in education might be traced back to the Cochrane Collaboration’s evidence-based medical initiatives in 1993 (Shahjahan 2011).
This movement includes a focus on behaviourist theory, quantitative research, randomised controlled trials, meta-analyses, ‘hard’ numerical data and high-stakes standardised testing, as enacted by global advocates of the pervasive mantra of ‘what works’ (critiqued by Biesta 2007), the ‘No Child Left Behind’ Act (Bush 2001) in the USA and ‘Visible Learning’ (Hattie 2009), originating in the Antipodes. Despite unintended negative effects (Darling-Hammond 2007) of these kinds of approaches, which ignore structural inequalities in pursuit of better outcomes, positivist versions of evidence-based education are increasingly dominating the field.

In medicine, evidence-based practice is said to have begun with David Sackett and colleagues’ late 1980s exploration of the dissonance between two versions of science: 1) objective measurement and 2) the artistry of clinical decision making (Greenhalgh 1999; Govert, Hans, and Nijhof 2003). Yet both medicine, and more recently education, have drifted away from medicine’s original desire to value both science and art, to a positivist focus on rational inquiry that involves the separation of knower and knowledge and the creation of truths external to human relationships, whether in the consulting room or the classroom. Where Sackett expected clinicians to disagree and encouraged researching the degree and contexts of this diversity, evidence-based medicine (EBM) at its worst produces policy in the form of guidelines that are used as sticks with which to beat clinicians into uniformity of practice (Howick 2014).

This article is not a critique of evidence-based practice per se, as doctors and teachers have always drawn on a range of evidence to inform practice, but an opportunity for education policymakers to connect with and learn from EBM, a movement potentially ‘in crisis’ (Greenhalgh, Howick, and Maskrey 2014, 1). The article departs from widespread and ‘trenchant’ (Connolly, Keenan, and Urbanska 2018, 1) criticisms of evidence-based practice emerging from the educational community, through its focus on the complexities of EBM. As a doctor and a teacher thinking and writing together, we particularly draw on the work of Professor Trisha Greenhalgh of the University of Oxford’s Centre for Evidence-Based Medicine, and her provocative questioning of whether this form of practice is ‘real vs rubbish’ (2015) or ‘broken’ (2015). While she has long recognised EBM’s ‘exciting opportunities for improving patient outcomes’ (Greenhalgh 2002, 395), she has been vocal in warning about its misuse. We recognise that this may shock policymakers and educators who idealise medicine and borrow its aura to cultivate guru status in educational leadership (Eacott 2017). However, we believe that a more honest understanding of the ambivalences and failures of EBM is essential for education, and we ask, as our guiding research question, what education policymakers need to know about the contested nature of EBM.

CONTACT: Lucinda McKnight, Deakin University, 221 Burwood Highway, Burwood 3125, Australia. Journal of Education Policy © 2019 Informa UK Limited, trading as Taylor & Francis Group.
Our approach is interdisciplinary in that we aim to be ‘advancing understanding ... in ways that would not have been thought possible through single disciplinary means’ (Mansilla and Gardner 2003, 3). In terms of interdisciplinary theory, we are united by entangled logics of both ontology (seeking to reconceptualise teacher-subjects within evidence-based frameworks) and accountability (for education policymakers to the public); we are similarly ‘driven by an agonistic or antagonistic relation to existing forms of disciplinary knowledge and practice’ (Barry, Born, and Weszkalnys 2008, 29) in relation to the evidence-based discourse now dominant in education.

This article is not an examination of policy itself, or an exhaustive literature review of EBM, but the critical identification of key documents from the medical literature that should or could inform educational policy development around evidence. They are selected on the basis of wishing to demonstrate a range of diverse views on medical evidence-based practice beyond the assumption that it is a universal good; this puts to work the advice that thinking critically about policy requires ‘diverse concepts and theories’ (Ball 1993, 10). We also translate the issues raised by this literature into specific questions for education policymakers to consider, instigate discussion around reasons for the absence of educational engagement with medical literature, and provide recommendations for further research.

Medical paradigms in education

Prior to engaging with the medical literature, we note that the use of medical paradigms in education has been widely critiqued over decades. Medical and scientific approaches
to education have been critiqued for their: colonising force (Shahjahan 2011); negation of diverse epistemologies (Watkins 1993); bullying tendencies (Sleeter and Stillman 2005; Carlson 2014); elision of ethics (Brantlinger 2009); gendered bias (Harding 1986; Apple 1986; Lather 2004; Pierre and Adams 2004); and detrimental effects on educators and education (Hammersley 1997; Eisner [1967] 2017, [2001] 2017; Biesta 2007). This list is but a tiny portion of the carefully argued resistance to the medical in education. Yet the attempted imposition and/or importation of medical ideology and practice is extraordinarily resilient in the face of this critique and tends to ignore the history of debate, preferring to cyclically re-invent itself as a ‘new paradigm’ (Davies et al. 2015, 515). Such new paradigms inevitably herald increased rigor, yet conveniently sidestep the intellectual rigor that substantive engagement with the history of critique would require. Advocates have similarly been motivated to ‘disguise the imperfections of evidence-based medicine’ (Govert, Hans, and Nijhof 2003, 465).

Proponents of evidence-based education (EBE) also fail to engage with the medical literature that could inform their initiatives. The Australian government’s Productivity Commission (2016a), for example, states blandly in relation to the research methods of evidence-based medicine that ‘such approaches are widely used in health research’ (17). Neither this overview nor the full report acknowledges EBM as a highly contested (Govert, Hans, and Nijhof 2003) and relatively recent paradigm, which has undergone three distinct stages (Greenhalgh 2015) since Sackett and colleagues’ initial interventions. Whether this reflects ignorance or a conscious cover-up remains open to deliberation.
The transformation of the policy and practice of Australian education is thus to be based on denial of the evidence to hand: that evidence-based practice in medicine, while potentially enormously beneficial, can also lead to substantial harms (Greenhalgh 2018; Tonelli 1998). EBM has also evolved considerably since the positivist model that seems to appeal to education policymakers. Astonishingly, educators readily admit that there is no evidence that evidence-based practice in education has any benefits (Burn and Mutton 2015), yet this does not lead to any delay in the implementation of a particular and, as we will demonstrate, limited traditional medical-style agenda.

Aims and theoretical resources

We set out briefly here the central debates in EBM; we recognise that we are, in a sense, playing the EBE movement at its own game, using medicine to bolster an argument. Yet we seek to do this in an informed manner, based on expertise in both education and medicine and following careful reading of the literature, achieved in tandem with interdisciplinary dialogue. We argue that this dialogue is essential so that EBE does not repeat the mistakes of EBM, and avoids detrimental effects on the lives of teachers and students and the waste of public funds on flawed interventions. The absence of this dialogue in Australia’s policy environment, and the failure to recognise the problems with EBM, already constitute a level of disingenuity that requires explanation. The promotion of one particular, dated and ideologically loaded version of EBM, without rationale for its choice, is similarly problematic.

In our discussion, we also draw on several theoretical frameworks. We invoke Foucault (1972) and the concept that discourse puts particular cultural politics and power relationships to work through discursive traces relying on
a particular recourse to rationality. We think with Harding’s (1986) philosophical perspectives on the ways science is gendered, and reliant on an androcentric, coercive rationality. She writes that, in science:

the focus on quantitative measures, variable analysis, impersonal and excessively abstract conceptual schemes is both a distinctly masculine tendency and one that serves to hide its gendered character. (Harding 1986, 105)

This focus includes notions of ‘hard’ science and ‘rigor’ that are used to justify evidence-based practice across disciplines, setting up binaries pitting reason against intuition: a key issue for EBM (Greenhalgh 2002) and potentially for education. We also think with Hall’s (1997) notion of the fetish: of looking at something yet failing, or refusing, to actually see it, instead regarding something else entirely. In this instance, we argue, educators claim to be looking at medicine, but are really focusing on, and deploying, its positivist discursive trappings as symbols of masculinist power. In the spirit of moving beyond critique to strategically address the problems we identify, we also include questions for education policymakers that, while specific to the Australian timeframe for implementation, have resonance across all countries engaging with evidence-based practices in education.

Issues in evidence-based medicine and questions for education policymakers

EBM, while having well defined benefits, also carries specific and important limitations

EBM is understood by Australian education policymakers as a scientific endeavour based on randomised controlled trials (RCTs). These are defined as studies ‘in which a number of similar people are randomly assigned to two (or more) groups to test a specific drug, treatment or other intervention.
One group (the experimental group) has the intervention being tested; the other (the comparison or control group) has an alternative intervention, a dummy intervention (placebo) or no intervention at all ... Outcomes are measured at specific times and any difference in response between the groups is assessed statistically’ (NICE 2018).

This kind of research makes a particular knowledge claim representing a single school, that of medical epidemiology, or population health (Tonelli 1998), thereby essentialising science and relying on a problematic version of EBM (Greenhalgh 2015). This form of EBM establishes knowledge hierarchies (Zuiderent-Jerak, Forland, and Macbeth 2012) and, in its strictest form, devalues intuition, clinical experience and physiological rationales, with the result that ‘the complex nature of sound clinical judgement is not appreciated’ (Tonelli 1998, 1234). Using this form of EBM to effect change risks an overemphasis on ‘epidemiological approaches’ (Grol 1997, 419), which are potentially co-opted for coercion, in contrast to educational, behavioural or social interactionist models of achieving change.

There are other medical paradigms, such as narrative-based medicine (Launer 1999; Elwyn and Gwyn 1999; Zaharias 2018) and patient-centred medicine (Levenstein et al. 1986), that are much more closely aligned with education and are used as a ‘medium for education’ (Greenhalgh and Hurwitz 1999, 49) within medicine. These paradigms,
however, and their research methodologies, involve the soft skills of listening, empathising, reflecting, connecting, sharing, and understanding social and cultural contexts; in effect, these paradigms value the practitioner and patient ahead of the authority invested in system or science. These skills that education is choosing to deny are constructed as feminised and inferior: shaping a professional identity, under neoliberalism, necessitates this process (McKnight 2016).

This rejection of multiple ways of knowing, and negation of the ‘soft stuff’ (Greenhalgh 2015) such as tacit knowledge and practical intuition for the adulation of hard science, echoes abuses described in the long history of feminist critique of science. Masculinist epistemology has long had reason triumphing over ‘feminine’ nature (Shiva 1993, [1988] 2010) in a dualism stretching back to René Descartes and the mind/body split. Harding (1986) has traced the androcentrism of science through centuries, along with the gendered fear that emotion, subjectivity and the abstract (all culturally inscribed as feminine) might overwhelm man[sic]kind. The maintenance of hierarchies of knowledge has also been specifically identified as a contribution to the perpetuation of European colonialism and the ‘imperial mission’ (Shahjahan 2011, 184) of maintaining racial and cultural superiority over ‘natives’ perceived as wild and irrational. Any narrow version of EBM, based primarily on RCTs and meta-analyses, performs these fears, abuses and inequalities of power, positioning practitioners and patients (students) in deficit. Thankfully, EBM has now shifted to integrating non-evidentiary knowledge, or to a broader understanding of what may be considered evidence, such as clinical expertise and patient preferences (Tonelli 1998; Greenhalgh 2015).
These alternative forms of knowledge are now considered not just desirable but ‘necessary’ (Tonelli 1998, 1235) for the profession, raising the question of how this might apply to education.

Related questions for education policymakers

● How will education learn from the mistakes of post-Sackett, positivist EBM, which devalued the clinician (teacher) and the wisdom and skills acquired through practice?
● How will education honour patient (student) perspectives ignored in early EBM?
● How can EBE move beyond replicating EBM, to model how teachers, students, parents and communities can lead in determining what kind of evidence matters?
● How will EBE seek to flatten Western/Northern knowledge hierarchies to achieve better relationships with Indigenous communities and welcome leadership by Indigenous elders in this area?
● For Australia, how is the way EBE is conceptualised conducive to reconciliation, especially considering the historical harms white Western/Northern medicine has visited on Australia’s Indigenous peoples?
● How can EBE, in an environment of resurgent social feminism, avoid the domination of soft knowledge and skills by hard science? How can it avoid accusations of being dominated by colonialist and sexist epistemologies that do not value the knowledges and experiences of students?
● How can educators develop sophisticated, insightful and education-led ways of integrating different evidence in practice?
● How can EBE support teachers in deciding when different forms of knowledge, such as experiential, emotional and opinion- and value-based knowledges (Tonelli 1998), should take precedence in their decision-making?

EBM changes the nature of medicine

There is no doubt that EBM can save lives, for example by being able to demonstrate when whole-population screening is effective. However, in some forms of EBM, ‘the individuality of patients tends to be devalued [and] the focus of clinical practice is subtly shifted away from the care of individuals toward the care of populations’ (Tonelli 1998, 1234). Breast screening for cancer is an excellent example of this, with regular mammography saving some lives while harming many individuals who test positive but may never have needed the invasive treatment they subsequently receive (Marmot et al. 2013). Breast cancer screening has thus become highly controversial for women at low risk.

The application of EBM, in the form of using guidelines based on RCTs, has been shown to inhibit individual patient care (Greenhalgh 2015, 2018). For example, a doctor who follows the population-based guideline stating that a hypothetical patient, Mr Zhu, should be on a statin to prevent heart disease does not consider Mr Zhu’s preference not to take medication or be medicalised. Recommending that he take the statin negates the patient’s deeply held values and compromises the interpersonal doctor/patient relationship. When benchmarking is applied to this situation, with financial rewards for doctors who comply, the situation becomes even more ethically fraught. Greenhalgh (2015), citing Bowker and Star, says that in EBM:

we create classification schemes. Once these become enshrined in guidelines, protocols etc. they ossify and reproduce our assumptions and prejudices (which now appear scientific).

EBM, with its population emphasis, makes individuals and their needs harder to perceive, and to respect.
Education has also identified the dangers of evidence-based authority dominating and ossifying in systems (Eacott 2017). This is particularly noted in relation to the work of evidence-based academic Professor John Hattie (2009), whose influential treatise on Visible Learning, based on a meta-meta-analysis of 800 meta-analyses of effective learning, has also attracted critique of its bullying power in relation to teachers (McKnight and Whitburn 2018). The formalising and normalising of guidelines based solely on quantitative research are problematic for both medicine and education.

As a result of EBM’s capacity to undermine individual care, clinicians are now being advised to use evidence more appropriately (Greenhalgh 2015), to tackle its ‘overall reductionism’ (Fava 2017, 3). EBM carries the danger that truths from RCTs will be ‘mechanistically applied’ to patients whose behaviour is ‘irredeemably contextual and idiosyncratic’ (Greenhalgh 1999, 324). A recent overview of RCTs conducted in education since 1980 has revealed that process evaluations, which may suggest contextual complications, are used in only a minority of trials (Connolly, Keenan, and Urbanska 2018, 13). This increases the risk that ‘what works’ may be applied to students without understanding how circumstances may alter their needs.
EBM is also said to standardise the patient, and the moral considerations made in the consultation (Govert, Hans, and Nijhof 2003), due to the distance created between clinician and guideline, such that diverse patients, including those with comorbidities (multiple medical problems), are assumed to conform to a simplistic ideal and treated accordingly. Reasoning from the particular (starting with the patient) to the general becomes reasoning from the general (guideline) to the particular. This is a fundamental philosophical change that readily becomes ‘advanced rule following’ (Greenhalgh 2018, 6) instead of case-based reasoning. The pursuit of ‘rigor’ all too easily becomes rigidity (Govert, Hans, and Nijhof 2003) for those who practice ‘cookbook medicine’ (Tonelli 1998, 1239). Such imperatives pose a clear threat to teacher autonomy and allow for the standardisation of teachers’ work, an ongoing project (Lindstrom 2018) that benefits governments and corporate entities, but not necessarily teachers and students.

As an antidote, EBM has now begun to ‘seek patient-based evidence from unfolding clinical conversations’ (Greenhalgh 2018, 6). Ironically, this is just as teachers are being told that RCTs and meta-analyses must come first to inform practice, rather than, for example, classroom action research involving conversations with students or education’s rich history of narrative inquiry, case study and ethnography.

Questions for education policymakers

● How will EBE foreground and address the central philosophical tension between benefits to populations and benefits to individuals inherent to evidence-based practice?
● How will EBE protect students from the harms of standardisation and the mechanistic application of guidelines?
● How will EBE support teachers to fight the ‘pressure to adhere’ (Greenhalgh 2018, 2) that the purported rigor of RCT-based guidelines effects?
● How will EBE respect diversity when we know prejudices inhere within guidelines?
● How will EBE recognise how intersectionality (multiple overlapping social categorisations that may produce disadvantage) complicates the application of evidence-based guidelines?
● How can EBE ensure that patient (student)-based evidence is not subsumed in a knowledge hierarchy?
● How will EBE refuse the pressures and harms created by benchmarking in relation to evidence-based practice?

RCTs are a much-critiqued aspect of EBM

Critiques of the dominance of RCTs and their resulting guidelines are central to the contested nature of EBM. Even in medicine the RCT carries a particular heft, but doctors, who are more exposed to big pharma and other medical industries than teachers, are aware that most RCTs are performed for the benefit of industry (Ioannidis 2016), not patients. RCTs are understood not as determining ultimate truths, but as producing facts that are always already loaded with theory (Greenhalgh 1999), which unfortunately is not acknowledged and explored. They are also coloured by bias and ideology. Expert clinicians know that guidelines based on evidence from RCTs must be considered with
‘scepticism’ (Greenhalgh 2018, 6). Sackett, the founding father of EBM, established for doctors how circumspect they must be in its application by insisting that they ask:

Were the patients in this trial sufficiently similar to the patient in front of me (in whatever key respects) that I can safely apply the findings in this case ... if not, piss on it. (quoted in Greenhalgh 2015)

By this he means that the doctor should use clinical judgement to decide whether to apply the guideline to a patient who may be quite different from those in the trial. Doctors are also aware that a large part of medical research simply does not inform guidelines. For example, research which does not provide favourable outcomes for funders may not be published. Rigid adherence to an evidence-based model means that poor evidence often has to suffice (Govert, Hans, and Nijhof 2003), with resulting deficiencies in patient care. RCTs deliberately and appropriately simplify, to try to establish a simple form of cause and effect. Yet most patients are much more complicated than those recruited and screened for trials. Caring for geriatric patients, for example, when most RCTs are conducted on younger people, means working in a forest of inappropriate guidelines. Doctors acknowledge that guidelines are valuable only for certain populations. RCTs also tend to assume greater life expectancy as their desired outcome, whereas geriatric patients, for example, may well have different needs and priorities, such as quality of life (Govert, Hans, and Nijhof 2003). The solution to this, however, is not to conduct ever more RCTs on different populations but, as Greenhalgh says, to use evidence more circumspectly.
Instead, EBM is evolving, attempting to move away from the earlier insistence (post-Sackett) that RCT-based evidence should trump all other evidence, to a re-valuation of individualised evidence from patients and patient-focused practice, and to case-based studies; this shift recognises that early EBM contained inherent biases against the patient (Greenhalgh 2015). Educators need to note this carefully, as this shift is towards the kinds of evidence traditionally valued in education prior to the arrival of a ‘medical’ approach. Yet this change in EBM faces the challenge of the bullying effects of RCTs, which critics claim are ‘perpetuated by the arrogant’ (Sackett et al. 1996, 71). RCTs produce evidence that may be ‘presented in ways doctors and patients do not understand’ (Howick 2014) and also serve to ‘buttress eminence-based claims to prestige’ (Ioannidis 2016, 83) made by gurus. This is another clear danger in education, already noted in relation to particular evidence-based programs and their initiators (Eacott 2017; McKnight and Whitburn 2018).

Clinicians risk being labelled ‘unscientific’ (Tonelli 1998, 1239) if they value other forms of reasoning, and may fear litigation (Howick 2014). The ‘apparent rigor’ (Greenhalgh 2018, 2) of RCT-based guidelines pressures clinicians to comply. In doing so, these guidelines develop a kind of brute moral authority in stark contrast to the sophisticated moral reasoning of the skilled practitioner, with the result that moral freedom is restricted and discomfort created (Govert, Hans, and Nijhof 2003). This has been similarly noted when teachers are forced to implement mandated change in contradiction to their intuitive impulses (Adoniou 2012). Part of the bullying effect of RCT-based guidelines is their burgeoning number, such that they are both ‘unmanageable and unfathomable’ (Greenhalgh, Howick, and Maskrey 2014,
2). Research has demonstrated that 3679 pages of national guidelines were relevant to the care of hospital patients in one unit over 24 hours (Greenhalgh, Howick, and Maskrey 2014). Clinicians and patients are potentially tyrannised when clinical management is ‘inappropriately driven by algorithmic protocols, top-down directives and population targets’ (5). With the best interests of students at heart, educators need to heed these warnings of the ways evidence-based practice can be misused.

Questions for education policymakers

● How will policy seek to transparently foreground the ‘crude’ (Greenhalgh 2018, 2) limitations of RCTs while also suggesting their potential benefits?
● How will EBE ensure education remains student-centred, when EBM has been identified as involving specific threats to patient-centred practice (Greenhalgh, Howick, and Maskrey 2014)?
● How will EBE enshrine the principle of starting with the patient (student), not the guideline (Greenhalgh 2015)?
● How will EBE ensure that practitioner expertise is not trumped by ‘appeals to data’ (Tonelli 1998, 1234)?
● How will EBE ensure that RCT-based evidence remains servant rather than master (Greenhalgh 2018) in relation to teachers?
● How will teachers retain and/or acquire the authority to regard RCT-type evidence-based guidelines with scepticism, and indeed, as Sackett (cited in Greenhalgh 2015) advises, to piss on them if they are deemed irrelevant?
● How will EBE uphold the maxim that ‘evidence can never directly dictate care’ (Tonelli 1998, 1239)?
● How will EBE limit the glamourising and fetishising of ‘hard’ data, and model respect for diverse forms of evidence?

EBM invites substantial conflict of interest

Adherents of positivist EBM are said to ‘do anything to make us all believe that EBM works’ (Govert, Hans, and Nijhof 2003, 465). Not coincidentally, there has been ‘insufficient consideration of problems related to financial conflicts of interest’ (Fava 2017, 3).
Even those supportive of entrepreneurship in medicine advise that RCTs and meta-analyses have been hijacked to ‘produce outcomes desirable for industry’ (Ioannidis 2016, 83). EBM involves evidence that is ‘flawed, out of date, conflicted and suffering from publication bias’ (Howick 2014), yet this evidence is routinely considered to be gold standard. It is known that ‘even studies with identical parameters have opposing results’ (Greenhalgh 2015) and that tiny effects can be misleadingly presented as having larger impact (Howick 2014) to serve particular interests. Drug and medical device industries now set agendas by misappropriating and distorting evidence (Greenhalgh, Howick, and Maskrey 2014) rather than responding to doctors’ and patients’ needs.

Considering these serious vulnerabilities, EBM now seeks to uphold the individual clinician’s judgement, based on a wide range of evidence including: intuition; tacit knowledge; clinical expertise; patient perspectives; reflection on in-depth case studies; heuristic reasoning; collaborative knowledge; awareness of context; and practice philosophies such as shared decision-making (Greenhalgh, Howick, and Maskrey 2014, 3). Again ironically, this
sounds very much like what Australian teachers do in their everyday work, pre-EBE. This is now in danger of being subsumed beneath the requirement to follow the yellow brick road of gold-standard RCTs and meta-analyses to find ‘what works’, a crude and universalist concept that has been robustly critiqued in education literature (see for example Biesta 2007) as dogma and yet upheld by some as a kind of anti-intellectual banner. ‘What works’ for some is not necessarily ‘what works’ for others, no matter what RCTs may find.

Questions for education policymakers

● In EBE, how will teachers, as the experts in and leaders of their profession, set agendas for research, or collaborate with students, parents and communities to set them?
● How will EBE manage the enormous vulnerabilities of evidence-based practice to conflicts of interest? How will these be identified and communicated to school communities and to the public?
● How will EBE support teacher innovation and student-centred practices such as the negotiation of the curriculum and pedagogy with students, when these are counter to the interests of other stakeholders? These include publishing companies who would prefer teachers to purchase scripted evidence-based programs and who peddle compliance with dominance as professionalism.
● How will EBE, through policy language and principles, recognise teacher work as creative synthesis rather than implementation?
● How will EBE ensure that the evidence claim in its title is fully understood by all stakeholders to include and value all forms of evidence?
● How will EBE ensure that those with conflicts of interest do not undermine teacher professionalism to serve their own financial aims, particularly through co-opting hard data to authorise and validate their marketing and professional learning programs?
● How will EBE eschew ‘what works’ for more complex, nuanced and better informed ways of thinking about teaching and learning?
Discussion: so much to learn

When Greenhalgh (2015) asks whether EBM is ‘broken’, she concludes that this is an irrelevant question. Instead, she wants doctors to ask ‘is the management of this patient in these circumstances an appropriate (real) or inappropriate (rubbish) application of the principles of EBM?’ She notes that after a generation of research into EBM, it remains ‘mired’ (Greenhalgh 2018, 6) in problems identified decades ago, in particular the enforcing of compliance with dubious evidence that has little relevance to an individual patient’s narrative and context. There is evidence that an integrative and patient-centred approach has better outcomes for patients (Greenhalgh 2015). Yet how can this be openly debated in education, considering the silencing effects of epidemiological medical discourse for doctors, let alone for teachers? How do educators avoid the early mistakes of EBM, when ‘naïve and opportunistic’ (Grol 1997, 42) application may have precluded deeper intellectual engagement with the challenges suggested here? How can educators avoid practitioner wisdom being obliterated by top-down managerialism?
We return to the uncritical adoption of the epidemiological model in education. Considering the extent of the medical literature, which we have only been able to touch on here in the form of a journal article, it would appear that this must be deliberate and serve other purposes. Research to establish why the medical ‘evidence-base’ of contested EBM has been ignored by policymakers is necessary and urgent. We hypothesise that the epidemiological model serves to establish domination and control of teaching as a feminised profession (Apple 1986), so that teachers are easily conscripted to provide numerical evidence of ‘progress’ for systems. This model also serves a broader policy agenda of centralising control and reducing teacher autonomy (Rowe and Skourdoumbis 2017) through an ‘uncritical – and almost religious-like – belief in the unwavering power of data, metrics and evidence’ (3). This model, sourced from outside teachers’ comfort zones, also serves to keep them ‘ontologically insecure’ (Ball 2003, 220) and saps the confidence and solidarity required for resistance.

It is extraordinary to consider that RCTs, having been little used in education, and with few educators in Australia having expertise in this methodology (Moss, cited in Productivity Commission 2016b, 219), should suddenly be touted as the gold standard for research in education. This is not a change instigated by the profession. It may be easy for policymakers to find superficial claims that all is well with EBM to counter our argument. For example, Stephen Margolis’ editorial claims that EBM is ‘undisputed’ and that patients are its ‘certain beneficiaries’ (Margolis 2018, 325), which is demonstrably incorrect. As yet another irony, some of the best research into the efficacy of EBM uses qualitative research methods such as autoethnography that would be de-valued in the Australian government’s proposed evidence hierarchy (Productivity Commission 2016b).
Greenhalgh’s (2018) autoethnographic study of her own serious injury and dangerous, RCT-based treatment is an example. Yet practitioners of research methods such as autoethnography have been publicly vilified as ‘self-absorbed c**t[s]’ (Campbell 2017), demonstrating the gendered bias and misogyny reserved for those practising ‘soft’ science. Real men, it is said, ‘do not collect soft data’ (Gherardi and Turner 1987, 1). Yet when, as might be predicted, top-down guidelines for EBE, despite their claim to bottom-up input, are not followed, only qualitative research will be able to find out why. It seems no coincidence that ethnography should be denigrated just as it has been specifically identified as a research approach that urgently needs to be mobilised to help societies understand the symbolic violences of quantitative data cultures (Kitchin 2014).

We argue that the failure to genuinely engage with the literature around EBM constitutes a fetishizing of medical power by educators and policymakers; they lust after white-coated precision and absolute authority, while failing to understand that medicine is much more complex than this. Undergraduate medical students, on the other hand, are taught to have a highly critical and nuanced understanding of EBM, considering the range of often conflicting guidelines available, and to weigh up the advantages and disadvantages of any advice. Medical students completing Monash University’s fourth year Evidence Based Medicine task, for example, are rewarded for identifying why the care of patients with complex needs and chronic illnesses should not necessarily be in alignment with guidelines. Medicine is understood as a profession dealing with uncertainty, and often involving clinical equipoise (Freedman 1987), which is the state in which there is no irrefutable basis for making clinical
choices, amidst the morass of doctor belief and experience, patient values and needs, diverse evidence, trials and guidelines. Equipoise recognises that often, a ‘best way’ or a ‘what works’ simply does not exist. Education has conveniently failed to engage with this complexity in medicine.

The industry-driven medical guideline ‘factory’ (Ioannidis 2016, 82) creates as many problems for doctors as it solves. Even when guidelines are produced by independent, non-industry funded organisations, problems can result. In relation to asthma, for example, a common disease with straightforward management goals, at least four conflicting sets of guidelines regarding which drugs to use have led to confusion and a sub-industry dictating how to work with contradictory advice. Curtis et al., in a recent UK-based ‘hot topics’ course for general practitioners, conclude that as a result of these publications ‘we now have guidelines that compete in several key areas and ultimately less clarity on how to optimally manage asthma than we had before’ (2018, 244). The idea that medical-style research will have the answers to ‘what works’ for education is, quite simply, a fiction. Medical research based on RCTs and meta-analyses cannot always answer, in a straightforward way, ‘what works’ for medicine. Medical guidelines are always changing, always being contradicted and updated, always serving different interests. Even Professor John Hattie, global purveyor of meta-meta-analyses for education, admits that his research is just one ‘explanatory story’ (Hattie 2009, 22) and that one of its key limitations is that it includes only quantitative data (Hattie 2009).
His research, carrying the weight of datified and medicalised mystique, has been implemented in ways more like Govert, Hans, and Nijhof’s (2003) rigidity than rigor, such that Hattie has now distanced himself from these endeavours, and even from the term ‘evidence-based’, which he now perceives to denote a lack of critical thinking (see Knudsen 2017, 256). The danger of policy creating ‘cookbook’ (Sackett et al. 1996, 72) teachers, who will teach to the guideline as well as to the test, has already been mapped by medicine. Meanwhile, Hattie’s statistical method has been critiqued as pseudo-science (Bergeron 2017), highlighting the dangers of teachers having to implement policy based on statistical manoeuvres they are not qualified to evaluate.

Hattie has also run into trouble with his medical examples, which he uses to bolster the validity of small effect sizes. He cites the example of aspirin preventing heart attacks in healthy people, giving statistics that lead to his conclusion that taking aspirin ‘sounds worth it to me’ (2009, 9). This does not, however, take into account the harms that taking aspirin can induce, which include serious gastrointestinal bleeding and haemorrhagic strokes. While there was a vogue for advising healthy people to take aspirin, it is now believed that the harms outweigh the benefits (Baigent et al. 2009). Hattie’s ‘truth’ is revealed as a limited and quickly outdated meta-analysis ‘fact’ that does not take into account the more complex picture, which was already emerging at the time he was writing: the study cited above was published in the same year he used aspirin as an example of a scientifically validated intervention. Statistics, when used uncritically, potentially cause harm. How will teachers be positioned in this numbers game? How many students will have to ‘take aspirin’, based on some effect size, before new statistics prove this to be harmful?
Educational practice must be based on much more than ever-changing RCTs and meta-analyses. In medicine, over five years, fifty percent of guideline recommendations are overturned by new evidence (cited in
Glasziou, Del Mar, and Preston 2017, 6). A comparable situation in education would create unimaginable turmoil for teachers.

Conclusion: recommendations for further research

Despite noting that hierarchies of quality in evidence are ‘contested’ (Productivity Commission 2016b, 76), the Australian government’s Productivity Commission report into establishing an evidence base for education concludes:

The commission supports investment in high quality research, particularly randomised controlled trials in tandem with process evaluations, to further develop the Australian education evidence base on what works best (2016, 212).

Yet even process evaluations of RCTs, which do offer some hope, have yet to be proven to adequately account for context and complexity (Connolly, Keenan, and Urbanska 2018, 16). Taking all that we have explained above into account, it is vital for the Australian government and all other systems, in developing policy in this area, to move towards a fuller and more consistent understanding of the benefits and costs of evidence-based practice. Rather than removing teacher agency and enforcing subordination to data-based cults (Eacott 2017), EBE needs to embrace a wide range of evidence. This is to acknowledge that all research acts, including those purporting to be ‘science’, are social acts (Harding 1986) and that data are never neutral, but are always framed ‘economically, ethically, temporally, spatially and philosophically’ (Kitchin 2014, 23). There are always interests at work in data. Rather than privileging RCTs, governments need to fund much needed research that seeks conceptual and philosophical approaches to understanding data (Kitchin 2014) and sets out to ascertain how practitioners successfully integrate a range of evidence to inform their work (Tonelli 1998). As in medicine, RCTs need to form a vital but not dominant component of the evidence teachers interpret.
This process itself needs to be understood as much more than the Productivity Commission’s much used ‘application’ and ‘implementation’, in relation to evidence. Mere deference to science is no substitute for actually engaging with the literature that shows EBM itself to be a contested and evolving field that is, ironically, trying to be more ‘educational’ in its approaches, for example by valuing patient narratives and perspectives, and intimate case studies. There is no pure, superior version of scientifically produced truth in the form of the RCT or meta-analysis, as medicine well knows. EBM has not yet fixed the problems it set out to solve (Greenhalgh, Howick, and Maskrey 2014), despite decades of trying. There is no excuse for education to continue to pretend that this superior truth, and a matching, neat, statistical evidence-based solution, exist. Sackett and colleagues warned at the outset that EBM ‘is not restricted to randomised controlled trials and meta-analyses’ (1996, 72); failure to recognise the limits of these ‘may lead to unexpected and untoward consequences’ (Tonelli 1998, 1235). A sophisticated understanding of medicine posits it as ‘more like casuistry than science’ (1239), as ‘moral knowing, a narrative, interpretive, practical reasoning’ (Hunter cited in Greenhalgh and Hurwitz 1999, 50), not the uncritical application of RCT-based evidence. The latter is considered
to be ‘only a restrictive interpretation’ (Fava 2017, 3) even of a scientific approach to clinical practice. The educational community needs energetic and robust debate around the best ways to incorporate RCT-based evidence into practice, as instigated by Greenhalgh in relation to medicine. We hope the insights developed in this article serve to demonstrate how genuine interdisciplinary dialogue can inform policy analysis. While we recognise that we ourselves each belong to heterogeneous disciplines, and that we are not representative voices, our generative sharing of discipline-specific literature and theory attempts to ‘explain phenomena, craft solutions [and] raise new questions’ (Mansilla and Gardner 2003, 3). More of this kind of complementary critical dialogue is needed in the process of any cross-disciplinary policy translation of concepts and initiatives. In this instance, for example, we have identified how risks such as the perpetuation of sexist and racist ideologies through scientism accompany uncritical adoption.

In the field of education, teachers work with multiple cultural fictions, with their own stories and values, and those of their students, parent body, school, communities and discipline to design rich, meaningful and effective learning experiences. For policymakers to judge and portray this complex, creative and highly personalised work as in deficit and in need of positivist medical intervention is to misrepresent this highly skilled profession. The rationale for such a judgement seemingly lies with an agenda to: de-professionalise teachers for the purposes of centralised control; gain political advantage through the manipulation of data; and, perhaps most dangerously, enable the service of vested interests that has blighted evidence-based medicine. This is an agenda that is hostile to teachers and students, and ultimately to the health of education.
Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Dr Lucinda McKnight is a pre-service teacher educator and senior lecturer in pedagogy and curriculum at Deakin University, Melbourne. She is also a qualified health and fitness professional. She has a published track record of research in the use of scientific metaphor in education.

Dr Andy Morgan is a British Australian medical doctor and senior lecturer in general practice at Monash University, Melbourne. He has an MA in Clinical Education from the Institute of Education, UCL, London. His research interests are in consultation skills and patient-centred care. He is a former fellow of the Royal College of General Practitioners, and current fellow of the Australian Royal College of General Practitioners.

ORCID

Lucinda McKnight
Andy Morgan
References

Adoniou, M. 2012. “Autonomy in Teaching: Going, Going. . .” English in Australia 47 (3): 78–86.
Apple, M. 1986. Teachers and Texts: A Political Economy of Class and Gender Relations in Education. New York: Routledge.
Baigent, C., L. Blackwell, R. Collins, J. Emberson, J. Godwin, R. Peto, J. Buring, et al. 2009. “Aspirin in the Primary and Secondary Prevention of Vascular Disease: Collaborative Meta-Analysis of Individual Participant Data from Randomised Trials.” Lancet 373 (9678): 1849–1860. doi:10.1016/s0140-6736(09)60503-1.
Ball, S. 2003. “The Teacher’s Soul and the Terrors of Performativity.” Journal of Education Policy 18 (2): 215–228. doi:10.1080/0268093022000043065.
Ball, S. J. 1993. “What Is Policy? Texts, Trajectories and Toolboxes.” Discourse: Studies in the Cultural Politics of Education 13 (2): 10–17. doi:10.1080/0159630930130203.
Barry, A., G. Born, and G. Weszkalnys. 2008. “Logics of Interdisciplinarity.” Economy and Society 37 (1): 20–49. doi:10.1080/03085140701760841.
Bergeron, P. 2017. “How to Engage in Pseudoscience with Real Data: A Criticism of John Hattie’s Arguments in Visible Learning from the Perspective of a Statistician.” McGill Journal of Education 52: 1. doi:10.7202/1040816ar.
Biesta, G. 2007. “Why ‘What Works’ Won’t Work: Evidence-Based Practice and the Democratic Deficit in Education Research.” Education Theory 57 (1): 1–22. doi:10.1111/j.1741-5446.2006.00241.x.
Brantlinger, E. 2009. “Impediments to Social Justice: Hierarchy, Science, Faith and Imposed Identity (Disability Classification).” In Handbook of Social Justice in Education, edited by W. Ayers, T. M. Quinn, and D. Stovall, 400–416. New York: Routledge.
Burn, K., and T. Mutton. 2015. “A Review of ‘Research-Informed Clinical Practice’ in Initial Teacher Education.” Oxford Review of Education 41 (2): 217–233. doi:10.1080/03054985.2015.1020104.
Bush, G. 2001. No Child Left Behind. Washington, DC: Department of Education.
Campbell, E. 2017.
“‘Apparently a Self-Absorbed C**t Is Now Academically Lauded’: Experiencing Twitter Trolling of Autoethnographers.” Forum: Qualitative Research 18 (3). doi:10.17169/fqs-18.3.2819.
Carlson, D. 2014. “The Bully Curriculum: Gender, Sexualities and the New Authoritarian Populism in Education.” In Gender and Sexualities in Education: A Reader, edited by E. Meyer and D. Carlson, 175–187. New York: Peter Lang.
Connolly, P., C. Keenan, and K. Urbanska. 2018. “The Trials of Evidence-Based Practice in Education: A Systematic Review of Randomised Controlled Trials in Education Research 1980–2016.” Educational Research 60 (3): 276–291. doi:10.1080/00131881.2018.1493353.
Curtis, S. 2018. “Hot Topics GP Update Course.” Medcast. Accessed February 11. https://
Darling-Hammond, L. 2007. “Race, Inequality and Educational Accountability: The Irony of ‘No Child Left Behind’.” Race Ethnicity and Education 10 (3): 245–260. doi:10.1080/13613320701503207.
Davies, M., B. D. Larissa, F. Rickards, S. Dinham, J. Conroy, and R. Davis. 2015. “Teaching as a Clinical Profession: Translational Practices in Initial Teacher Education – an International Perspective.” Journal of Education for Teaching 41 (5): 514–528. doi:10.1080/02607476.2015.1105537.
Eacott, S. 2017. “School Leadership and the Cult of the Guru: The neo-Taylorism of Hattie.” School Leadership & Management 37 (4): 413–426. doi:10.1080/13632434.2017.1327428.
Eisner, E. [1967] 2017. “Educational Objectives – Help or Hindrance?” In The Curriculum Studies Reader, edited by D. J. Flinders and S. J. Thornton, 129–135. New York: Routledge.
Eisner, E. [2001] 2017. “What Does It Mean to Say a School Is Doing Well?” In The Curriculum Studies Reader, edited by D. J. Flinders and S. J. Thornton, 313–321. New York: Routledge.
Elwyn, G., and R. Gwyn. 1999. “Stories We Hear and Stories We Tell: Analysing Talk in Clinical Practice.” BMJ 318 (7177): 186–188. doi:10.1136/bmj.318.7177.186.
Fava, G. A. 2017. “Evidence-Based Medicine Was Bound to Fail: A Report to Alvan Feinstein.” Journal of Clinical Epidemiology 84: 3–7. doi:10.1016/j.jclinepi.2017.01.012.
Foucault, M. 1972. The Archaeology of Knowledge. New York: Pantheon Books.
Freedman, B. 1987. “Equipoise and the Ethics of Clinical Research.” The New England Journal of Medicine 317 (3): 141–145. doi:10.1056/nejm198707163170304.
Gherardi, S., and B. Turner. 1987. Real Men Don’t Collect Soft Data. Trento: Dipartimento di Politica Sociale, University of Trento.
Glasziou, P., C. Del Mar, and S. Preston. 2017. “Navigating the Maze: Better Searching, Sifting and Critical Analysis of Research.” Royal Australian College of General Practitioners Evidence-Based Practice ‘Train the Trainer’ National Workshop.
Govert, V., A. Hans, and A. Nijhof. 2003. “Fundamental Shortcomings of Evidence-Based Medicine.” Journal of Health Organization and Management 17 (6): 463–471. doi:10.1108/14777260310506614.
Greenhalgh, T. 1999. “Narrative Based Medicine in an Evidence Based World.” BMJ 318 (7179): 323–325. doi:10.1136/bmj.318.7179.323.
Greenhalgh, T. 2002. “Intuition and Evidence: Uneasy Bedfellows?” British Journal of General Practice 52: 395–400.
Greenhalgh, T. 2015. “Real Vs Rubbish EBM.” YouTube. Accessed 22 June. com/watch?v=qYvdhA697jI
Greenhalgh, T. 2018. “Of Lamp Posts, Keys, and Fabled Drunkards: A Perspectival Tale of 4 Guidelines.” Journal of Evaluation in Clinical Practice 1–7. doi:10.1111/jep.12925.
Greenhalgh, T., and B. Hurwitz. 1999. “Why Study Narrative?” BMJ 318 (7175): 48–50. doi:10.1136/bmj.318.7175.48.
Greenhalgh, T., J. Howick, and N. Maskrey. 2014. “Evidence-Based Medicine: A Movement in Crisis?” British Medical Journal 348: 1–7. doi:10.1136/bmj.g3725.
Grol, R. 1997. “Beliefs and Evidence in Changing Clinical Practice.” British Medical Journal 315: 418–421.
Hall, S. 1997.
“The Spectacle of the Other.” In Representation: Cultural Representations and Signifying Practices, edited by S. Hall, 223–290. London: SAGE/The Open University.
Hammersley, M. 1997. “Educational Research and Teaching: A Response to David Hargreaves’ TTA Lecture.” British Educational Research Journal 23 (2): 141–161. doi:10.1080/0141192970230203.
Harding, S. 1986. The Science Question in Feminism. Maidenhead, UK: Open University Press.
Hattie, J. 2009. Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement. Abingdon: Routledge.
Howick, J. 2014. “Rethinking Evidence-Based Medicine: From Real to Rubbish.” Centre for Evidence-Based Medicine, University of Oxford. Accessed June 22. 01/rethinking-evidence-based-medicine-from-rubbish-to-real/
Ioannidis, J. P. 2016. “Evidence-Based Medicine Has Been Hijacked: A Report to David Sackett.” Journal of Clinical Epidemiology 73: 82–86. doi:10.1016/j.jclinepi.2016.02.012.
Kitchin, R. 2014. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London: Sage.
Knudsen, H. 2017. “John Hattie: I’m a Statistician, I’m Not a Theoretician.” Nordic Journal of Studies in Educational Policy 3 (3): 253–261. doi:10.1080/20020317.2017.1415048.
Lather, P. 2004. “This IS Your Father’s Paradigm: Government Intrusion and the Case of Qualitative Research in Education.” Qualitative Inquiry 10 (1): 15–34. doi:10.1177/1077800403256154.
Launer, J. 1999. “A Narrative Approach to Mental Health in General Practice.” BMJ: British Medical Journal 318 (7176): 117–119. doi:10.1136/bmj.318.7176.117.
Levenstein, J. H., E. C. McCracken, I. R. McWhinney, M. A. Stewart, and J. B. Brown. 1986. “The Patient-Centred Clinical Method. 1. A Model for the Doctor-Patient Interaction in Family Medicine.” Family Practice 3 (1): 24–30. doi:10.1093/fampra/3.1.24.
Lindstrom, P. N. 2018. “The Pendulum of Standardization: An English Journal Retrospective.” English Journal 107 (5): 44–50.
Mansilla, V. B., and H. Gardner. 2003. “Assessing Interdisciplinary Work at the Frontier: An Empirical Exploration of ‘Symptoms of Quality’.” CNRS Society of Information Rethinking Interdisciplinarity Seminar, December 1.
Margolis, S. 2018. “Evidence-Based Medicine.” Australian Journal of General Practice 47 (6): 325.
Marmot, M. G., D. G. Altman, D. A. Cameron, J. A. Dewar, S. G. Thompson, and M. Wilcox; The Independent U.K. Panel on Breast Cancer Screening. 2013. “The Benefits and Harms of Breast Cancer Screening: An Independent Review: A Report Jointly Commissioned by Cancer Research UK and the Department of Health (England) October 2012.” British Journal of Cancer 108 (11): 2205–2240. doi:10.1038/bjc.2013.177.
McKnight, L. 2016. “Meet the Phallic Teacher: Designing Curriculum and Identity in a Neoliberal Imaginary.” Australian Educational Researcher 43 (4): 473–486. doi:10.1007/s13384-016-0210-y.
McKnight, L., and B. Whitburn. 2018. “Seven Reasons to Question the Hegemony of Visible Learning.” Discourse: Studies in the Cultural Politics of Education. doi:10.1080/01596306.2018.1480474.
National Institute for Health and Care Excellence. 2018. “Glossary: Randomised Controlled Trial.” Accessed February 11 2018.
St. Pierre, E. A. 2004. “Refusing Alternatives: A Science of Contestation.” Qualitative Inquiry 10 (1): 130–139. doi:10.1177/1077800403259494.
Productivity Commission. 2016a. National Evidence Base Productivity Commission Inquiry Report. Canberra: Productivity Commission.
Productivity Commission. 2016b. National Education Evidence Base: Productivity Commission Inquiry Report: Overview and Recommendations. Canberra: Australian Government.
Rowe, E. E., and A. Skourdoumbis. 2017. “Calling for ‘Urgent National Action to Improve the Quality of Initial Teacher Education’: The Reification of Evidence and Accountability in Reform Agendas.” Journal of Education Policy 1–17. doi:10.1080/02680939.2017.1410577.
Sackett, D. L., W. M. C. Rosenberg, J. A. Muir Gray, R. B. Haynes, and W. S. Richardson. 1996. “Evidence-Based Medicine: What It Is and What It Isn’t.” British Medical Journal 312: 71–72.
Shahjahan, R. A. 2011. “Decolonizing the Evidence-Based Education and Policy Movement: Revealing the Colonial Vestiges in Educational Policy, Research, and Neoliberal Reform.” Journal of Education Policy 26 (2): 181–206. doi:10.1080/02680939.2010.508176.
Shiva, V. [1988] 2010. Staying Alive: Women, Ecology and Development. New York: South End Press.
Shiva, V. 1993. Monocultures of the Mind: Perspectives on Biodiversity and Biotechnology. London: Zed Books.
Sleeter, C., and J. Stillman. 2005. “Standardizing Knowledge in a Multicultural Society.” Curriculum Inquiry 35 (1): 27–45. doi:10.1111/j.1467-873X.2005.00314.x.
Tonelli, M. R. 1998. “The Philosophical Limits of Evidence-Based Medicine.” Academic Medicine 73: 1234–1240.
Watkins, W. H. 1993. “Black Curriculum Orientations: A Preliminary Inquiry.” Harvard Educational Review 63 (3): 321–338. doi:10.17763/haer.63.3.26k2433r77v631k2.
Zaharias, G. 2018. “Learning Narrative-Based Medicine Skills: Narrative-Based Medicine 3.” Canadian Family Physician 64 (5): 352–356.
Zuiderent-Jerak, T., F. Forland, and F. Macbeth. 2012. “Guidelines Should Reflect All Knowledge, Not Just Clinical Trials.” BMJ: British Medical Journal 345. doi:10.1136/bmj.e6702.