Explicable Artificial Intelligence for Education (XAIED)
DR. ROBERT FARROW
ROB.FARROW@OPEN.AC.UK
CALRG ANNUAL CONFERENCE 2022
2
Explainable AI (XAI) + Artificial Intelligence in Education (AIED) =
Explainable Artificial Intelligence in Education (XAIED)
3
WHAT IS IT?
ARTIFICIAL INTELLIGENCE
• The mechanical simulation of human agency, intelligence, and perception
• The use of machines to perform tasks that have traditionally been performed by natural intelligence
• Involves a constellation of technologies, including machine learning, natural language processing, and speech recognition
4
THE VISION
ARTIFICIAL INTELLIGENCE
• Predicted to disrupt human society and productivity as a ‘4th Industrial Revolution’ (Schwab, 2016)
• Solutions to problems: repetitive tasks; managing risks; increasing affordability; innovation; accessibility; efficiencies; enhanced cognition
• Market expected to be worth $126 billion by 2025 (Statista, 2020)
CC-BY Emily Spratt
https://en.m.wikipedia.org/wiki/File:Alain_Passard_AI_Art.png
7
WHAT DOES THE FUTURE HOLD?
AI IN EDUCATION
The application of AI in education is increasing…
According to the AI in Education Market Research Report (2020), the global market reached $1.1 billion in 2019 and is predicted to generate $25.7 billion in 2030.
Thousands of institutions are already using AI technologies to shape and plan their delivery of education (Zawacki-Richter et al., 2019; Luckin et al., 2016; Dignum, 2021).
Drivers:
• Demand for personalised learning
• Technological sophistication
• Educational infrastructure
• Specialisation
• AI literacy
• Covid-19? (n.b. Heaven, 2021; Everett, 2021)
8
EXAMPLES OF AI IN EDUCATION
AI IN EDUCATION
Algorithmic decision-making – who to enrol, how to support their learning
Analytics – learning, social, emotional
Automated assessment/feedback – quizzes, writing analytics
Delegation of administrative tasks – freeing up time for learning & teaching
Knowledge management – making better use of data, making connections
Nudge autonomy – prompting stakeholders to take actions at appropriate times
Predictive analytics – modelling different scenarios (see the sketch after this list)
Simulations & practical experience – authentic learning experiences
Student support – AI tutoring, chatbots, accessibility
VLE / UX – personalised interfaces
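To make the "predictive analytics" item above concrete, here is a minimal sketch of the kind of model behind many at-risk dashboards: a logistic regression over engagement features. Everything here is hypothetical and synthetic (the feature names, the data, and the labelling rule); it illustrates the technique, not any real AIED product.

```python
# Minimal sketch of the "predictive analytics" use case: a model that flags
# students at risk of non-completion from VLE engagement data. All feature
# names and the data itself are synthetic, generated purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical weekly engagement features for each student.
logins_per_week = rng.poisson(4, n)
forum_posts = rng.poisson(2, n)
quiz_scores = rng.uniform(0, 100, n)
X = np.column_stack([logins_per_week, forum_posts, quiz_scores])

# Synthetic "at risk" label loosely tied to low engagement (illustration only).
risk_logit = 1.5 - 0.3 * logins_per_week - 0.2 * forum_posts - 0.02 * quiz_scores
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-risk_logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The coefficients are one (limited) form of explicability: they show how
# each feature pushes the predicted risk up or down.
for name, coef in zip(["logins/week", "forum posts", "quiz score"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
print("held-out accuracy:", model.score(X_test, y_test))
```

Even this simple model raises the explicability questions discussed in the rest of the deck: the coefficients are readable, but the data collection behind them and the decision to intervene are not explained by them.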
9
TWO PHILOSOPHICAL ARGUMENTS AGAINST STRONG AI
Turing, A. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433–460.
Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3(3), 417–457.
EXPLICABLE AI IN EDUCATION
10
MACHINE LEARNING AND GENERALISED AI
EXPLICABLE AI IN EDUCATION
Jang, E. (2022). All Roads Lead to Rome: The Machine Learning Job Market in 2022. Eric Jang. https://evjang.com/2022/04/25/rome.html
11
ETHICAL ISSUES AND EXPLICABILITY
EXPLICABLE AI IN EDUCATION
All algorithms in machine learning are black boxes to some extent (or at least ‘grey boxes’).
Deep learning algorithms and neural networks recognise patterns over massive data sets, but reconstructing how a particular output was produced is problematic.
In addition, outputs from ML systems also require interpretation.
Where does algorithmic accountability lie?
CC BY https://commons.wikimedia.org/wiki/File:Blackbox3D-withGraphs.png
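One standard, model-agnostic way to probe a black box is permutation importance: shuffle one input feature at a time and measure how much held-out performance drops. A minimal sketch with synthetic data and scikit-learn (illustrative only; not drawn from the sources cited on this slide):

```python
# Probing a black-box model with permutation importance: shuffle each feature
# and see how much held-out accuracy drops. A large drop suggests the model
# relies on that feature. Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
# Only features 0 and 1 actually matter in this toy labelling rule.
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
black_box = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
baseline = black_box.score(X_te, y_te)

for j in range(X.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j only
    drop = baseline - black_box.score(X_perm, y_te)
    print(f"feature {j}: importance ~ {drop:.3f}")
```

Note the limit: this shows which inputs the model is sensitive to, not how it combines them, so the accountability question above remains open.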
12
ETHICAL ISSUES AND EXPLICABILITY: SURVEILLANCE
EXPLICABLE AI IN EDUCATION
Many of the anticipated uses of AIED rely on the assumption that mass data collection and analysis will take place.
This can include data about learner progress through a virtual learning environment, but may also include tracking biometric data, taking voice samples, and using eye-tracking software (Luckin et al., 2016: 34).
Already there is considerable reliance on controversial tracking technologies in proctoring and assessment (Coughlan et al., 2021).
The scale and penetration of machine learning data collection can be unsettling: a recent study found that 146 of 164 EdTech products recommended, mandated or procured by governments during the Covid-19 pandemic harvested the data of millions of children (Human Rights Watch, 2022).
13
ETHICAL ISSUES AND EXPLICABILITY: BIAS
EXPLICABLE AI IN EDUCATION
Algorithmic bias has been the focus of many critiques of AI (e.g. Baker & Hawn, 2021; Birhane et al., 2022; Noble, 2018; Samuel, 2021; Wachter, 2022; Zuboff, 2019).
CC BY NC SA https://www.flickr.com/photos/lwr/2222227513
14
EXPLICABLE AI (XAI) AS ETHICAL RESPONSE
Zawacki-Richter et al. (2019) suggest that ethics is weakly represented in contemporary discourse around AI.
Birhane et al. (2022) argue that although AI ethics is a rapidly growing field, it cannot keep pace with the rapid development and rollout of AI systems into all parts of society; as a result, most work in this area is shallow.
The United Nations has called for a moratorium on the sale and use of AI systems that pose a serious risk to human rights (United Nations, 2021). In the most recent recommendations made by the UN High Commissioner to member states there is a call to ban any applications that cannot be run in full compliance with human rights legislation (ibid., 15).
The emerging consensus is that there needs to be adequate transparency and explicability for the use of algorithms (Floridi et al., 2018; Gunning et al., 2019; Kiourti et al., 2019; Panigutti et al., 2020).
Explicability is intended to make it easier to reconstruct actions taken by AI programs and to show who might be responsible for consequences.
Páez (2019): ‘The task ahead for XAI is thus to fulfil the double desiderata of finding the right fit between the interpretative and the black box model, and to design interpretative models and devices that are easily understood by the intended users.’
EXPLICABLE AI IN EDUCATION
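One concrete reading of Páez's 'fit between the interpretative and the black box model' is the global surrogate technique: train a simple, human-readable model to imitate the black box's predictions, then check how faithfully it does so. A minimal sketch, again with a synthetic setup assumed purely for illustration:

```python
# Global surrogate: a shallow decision tree trained to imitate a black-box
# model. The tree's rules are the "interpretative model"; how often it agrees
# with the black box is its fidelity. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 4))
y = ((X[:, 0] > 0) & (X[:, 2] < 0.5)).astype(int)

black_box = RandomForestClassifier(random_state=2).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=2)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules (interpretability)...
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))

# ...and agreement with the black box on fresh data (fidelity).
X_new = rng.normal(size=(500, 4))
fidelity = (surrogate.predict(X_new) == black_box.predict(X_new)).mean()
print("fidelity:", fidelity)
```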
15
AI4PEOPLE ETHICAL FRAMEWORK
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
EXPLICABLE AI IN EDUCATION
The AI4People initiative synthesises 47 principles drawn from six sets of guidelines into four traditional ethical principles and proposes one new AI-specific principle: explicability.
16
APPLYING THE AI4PEOPLE ETHICAL FRAMEWORK
Beneficence
• Education as a common good
• Extending educational opportunity – but are some thereby excluded?
• AI efficiencies may not improve pedagogical quality
Non-maleficence
• Managing risk, avoiding misuse (n.b. YouTube)
• Mistakes happen at scale
• What happens when things go wrong? Who has oversight?
• Algorithms take time/data to calibrate – what happens to the life chances of those who pass through the system while this is happening?
• Risk of devaluing human labour and contribution
Autonomy
• Balancing the rights and privacy of learners with the potential pedagogical benefits for them
• Nudged rather than delegated autonomy
• Processes for retrieving decision-making powers (sketched below)
EXPLICABLE AI IN EDUCATION
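'Processes for retrieving decision-making powers' can be made concrete in software as a human-in-the-loop gate: a model recommendation is only auto-accepted above a confidence threshold, and every decision is logged against a named decision-maker. A minimal sketch; the names, threshold, and actions are hypothetical, not from any real AIED system:

```python
# Route low-confidence AI recommendations to a named human decision-maker
# and log who decided what. Illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    student_id: str
    action: str
    decided_by: str      # a person or "model": someone is always accountable
    confidence: float
    timestamp: str

def decide(student_id: str, model_action: str, confidence: float,
           reviewer: str, threshold: float = 0.9) -> Decision:
    """Accept the model's action only above `threshold`; otherwise defer."""
    if confidence >= threshold:
        decided_by, action = "model (auto-accepted)", model_action
    else:
        decided_by, action = reviewer, f"REVIEW: {model_action}?"
    return Decision(student_id, action, decided_by, confidence,
                    datetime.now(timezone.utc).isoformat())

print(decide("s-001", "offer tutoring", 0.95, reviewer="tutor A"))
print(decide("s-002", "withdraw support", 0.62, reviewer="tutor A"))
```

The design point is that the log always names a responsible human, anticipating the explicability question on the next slide: "Who is responsible for the way it works?"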
17
APPLYING THE AI4PEOPLE ETHICAL FRAMEWORK
AI IN EDUCATION
Justice
• Algorithmic system bias
• How could we resolve competing claims to justice?
• Preventing new harms
• Can we reduce careful judgements to algorithms or decision trees?
Explicability
• “How does it work?”
• “Who is responsible for the way it works?”
• A route to building trust in AI
18
CRITIQUE OF EXPLICABILITY
EXPLICABLE AI IN EDUCATION
• Introducing XAI features can add significantly to the cost of systems (Antoniadi et al., 2021)
• Páez (2019) argues that explicability remains a vague and under-theorised term with no definitive meaning
• Robbins (2020) argues that the requirement for explicability is misplaced on the basis that many uses for AI are low risk and don’t require explication; in some cases the need to provide explication could prevent the advantages of AI being realised (“a principle of explicability for AI makes the use of AI redundant”)
• Jiang et al. (2022) argue that XAI can overwhelm users and introduce epistemic uncertainty, which can undermine productive use of AI systems
• Gilpin et al. (2018) similarly argue that existing approaches to XAI are insufficient (especially in the case of deep neural networks)
• Approaches to understanding explicability are typically siloed and would benefit from greater interdisciplinarity (Dignum, 2021)
• Technical transparency is insufficient for complex decision making, which must be augmented by social, technological and organisational dimensions. Hence, a socially situated XAI needs to prioritise the human-AI assemblage over the AI itself (Ehsan et al., 2021)
19
EXPLICABILITY AS INTERPRETABILITY / FIDELITY
EXPLICABLE AI IN EDUCATION
This typology, proposed by Markus et al. (2021), distinguishes interpretability, which is human-readable, from fidelity, which is the accurate technical description of what happens in the ‘black box’. The technical explanation of an algorithm might include things like exploratory or statistical analysis; evaluation of machine learning models; periodic iterations of concepts and validation of results; user testing; and producing documentation for datasets and models.
For general stakeholders lacking expert knowledge, such transparency presumably has limited value without a trusted broker who can interpret on their behalf.
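A complementary approach trades global fidelity for local interpretability: explain a single decision by fitting a small linear model to the black box's behaviour in the neighbourhood of one instance (the idea behind tools such as LIME). A from-scratch sketch under the same synthetic assumptions as the earlier examples:

```python
# Local, perturbation-based explanation: perturb one instance, query the
# black box, and fit a proximity-weighted linear model around it. The linear
# coefficients approximate the black box near that one point only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 4))
y = ((X[:, 0] + X[:, 1] ** 2) > 1).astype(int)
black_box = RandomForestClassifier(random_state=3).fit(X, y)

x0 = X[0]                                        # the decision to be explained
Z = x0 + rng.normal(scale=0.5, size=(500, 4))    # perturbations around it
p = black_box.predict_proba(Z)[:, 1]             # black-box outputs
w = np.exp(-((Z - x0) ** 2).sum(axis=1))         # weight nearby samples more

local = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
print("local effect of each feature:", np.round(local.coef_, 3))
```

In Markus et al.'s terms, such a local explanation is highly interpretable for the stakeholder affected by one decision, but its fidelity is limited to that neighbourhood of the model.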
20
SOCIO-TECHNICAL AI RISKS AND AMELIORATIONS (BASED ON SELBST ET AL., 2018)
EXPLICABLE AI IN EDUCATION
21
PRACTICAL STEPS
AI IN EDUCATION
Floridi et al. (2018) recommend action points for education:
• Incentivise (through finance and regulation) zones for testing and developing AI
• Support the creation of educational curricula and public awareness activities around the societal, legal, and ethical impact of AI
• School curricula to include computer science
• Qualification programmes to educate employees on the societal, legal, & ethical impact of working with AI
• Include ethics and human rights in scientific and engineering curricula
• Develop educational programmes for the public at large
• Engage with wider initiatives such as the UN’s Sustainable Development Goals
Whittlestone et al. (2019) suggest a roadmap for going ‘beyond principles’:
• Uncovering and resolving the ambiguity inherent in commonly used terms, such as privacy, bias, and explainability
• Identifying and resolving tensions between the ways technology may both threaten and support different values
• Building a more rigorous evidence base for discussion of ethical and societal issues
Bulathwela et al. (2019) propose diversity and dialogue to ‘collectively design a global education revolution that will help us solve educational inequity’ by addressing the political and social context which engenders unequal access to quality education.
22
XAITK TOOLKIT
EXPLICABLE AI IN EDUCATION
The Explainable AI Toolkit (XAITK) contains a variety of tools and resources to help users, developers, and researchers understand complex machine learning models. The toolkit combines a searchable repository of independent contributions and a more integrated, common software framework. The toolkit was developed under the Defense Advanced Research Projects Agency (DARPA) Explainable Artificial Intelligence (XAI) program.
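For flavour, here is the kind of perturbation-based saliency method that toolkits like XAITK package for vision models, written from scratch. This is not the toolkit's API; it is a sketch of the underlying occlusion idea, with a dummy stand-in for a real image classifier:

```python
# Occlusion saliency from scratch: mask patches of an image and record how
# the model's score for the predicted class changes. `model_score` is a
# hypothetical stand-in for any image classifier returning a scalar score.
import numpy as np

def occlusion_saliency(image, model_score, patch=8):
    """Return a coarse saliency map: score drop when each patch is occluded."""
    h, w = image.shape[:2]
    base = model_score(image)
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0  # zero out one patch
            sal[i // patch, j // patch] = base - model_score(occluded)
    return sal  # large values = regions the model depended on

# Usage with a dummy "model" that only attends to the top-left corner:
img = np.random.default_rng(4).random((32, 32))
score = lambda im: im[:8, :8].mean()
print(occlusion_saliency(img, score, patch=8))
```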
23
THE LIMITS OF TRANSPARENCY?
EXPLICABLE AI IN EDUCATION
• Pedagogical processes are not always transparent (especially to learners) – does XAI threaten to disrupt this?
• ‘Interpretable’ looks different to different kinds of stakeholders – does this mean they need their own windows into algorithms?
• What if learning is improved by not understanding the AI models?
• Tong et al. (2021) found that AI feedback can be of higher quality but also perceived negatively by learners
• Protection of algorithms on the basis of commercial interest – but note that even trade secrets can be publicly audited under the right protocols (Morten, 2022)
• More broadly, any exemption from transparency is in tension with the concept of XAIED
24
FOUR CHALLENGES FOR XAI (BASED ON FELTEN, 2017, CITED IN MUELLER, 2019:18)
EXPLICABLE AI IN EDUCATION
Summary
26
SUMMARY
EXPLICABLE AI IN EDUCATION
• The influence of AIED is increasing, but in the rush to market important ethical aspects are overlooked.
• In educational contexts, it should always be possible to provide accounts of AIED which are interpretable to the layperson, alongside more technical accounts which can be made available to specialist auditors or external examiners.
• Furthermore, appropriate governance measures need to be put in place so that it is always possible to identify a human being who takes responsibility for what an algorithm has done or recommended (cf. Floridi et al., 2018).
• There are good arguments for making XAIED the default expectation for AIED. Greater transparency and explicability are a route to critical reflection upon the application of algorithms in education and in social life more generally.
27
SUMMARY
EXPLICABLE AI IN EDUCATION
• The only viable route to ethical AIED is through inclusion and openness, but this drive towards transparency can be in tension with other aspects of the learning process.
• It is also necessary to acknowledge that radical transparency is disruptive to traditional pedagogical approaches, and introduces some risks (such as algorithmic manipulation; bias; modifying rather than measuring behaviour; and disincentivizing learning).
• Exemptions may be pedagogical or pastoral, or for other contextual reasons. Crucially, even with exemptions it should be possible to explicate these aspects of AIED systems. To ensure this, AIED should undergo regular expert auditing (including exemptions).
• We are likely to see the emergence of new roles and training processes:
• Cross-training between machine learning techniques and ethical expertise
• AI learning specialists
• Algorithmic presentation specialists
• Explicability auditors
Special Issue:
Instituting socio-technical education futures:
Encounters for technical democracy, data justice,
and post-automation
THANK YOU
rob.farrow@open.ac.uk
@philosopher1978

Editor's Notes

  • #6 AI regularly hits the headlines these days – but has the quality of AI improved?
  • #8 Abu Dhabi has the world’s first AI university
  • #9 Other examples?
  • #10 Turing Test and Chinese Room: both are task-based, hence address weak AI. Information processing is not equal to understanding. Based on functionalism in philosophy of mind. Key question – does consciousness matter when it comes to intelligence?
  • #16 Bioethics, because it is ‘applied’ and considered closer to digital and medical ethics than traditional ethics
  • #18 Noble, Safiya Umoja (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
  • #22 https://codebots.com/artificial-intelligence/the-3-types-of-ai-is-the-third-even-possible Citation: Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf