Presentation (with Eamon Costello) from the Global Smart Education Conference (The 6th International Conference on Smart Learning Environments), Beijing Normal University, China.
The presentation explores issues in AI-driven learning systems and the implications of machine learning approaches for inclusion and access to education.
1. Open Mining
Education, Ethics
& AI
Dr Eamon Costello1 & Dr Rob Farrow2
1 Dublin City University, Ireland
2 The Open University, United Kingdom
2. The AI Gold Rush:
Whose educational futures are
being served?
3. What is AI?
• The mechanical simulation of human agency,
intelligence, and perception
• The use of machines to perform tasks that have
traditionally been performed by natural
intelligence
• Involves a constellation of technologies, including
machine learning, natural language processing, and
speech recognition
4.
5. Hyperbolic AGI
This image (Jang, 2022) represents solutionist approaches to general
AI, which rest on a number of psychologically reductive and
philosophically suspect foundations.
6. Surveillance and AI
Many of the anticipated uses of AIED rely on the assumption that
mass data mining and analysis will take place.
This can include data about learner progress through a virtual
learning environment; but may include tracking biometric data,
taking voice samples, and using eye-tracking software (Luckin et
al., 2016: 34).
Already there is considerable reliance on the use of controversial
tracking technologies in proctoring and assessment (Coghlan et
al., 2021).
The scale and penetration of machine learning data collection
can be unsettling: a recent study found that 146 of 164 EdTech
products recommended, mandated or procured by
governments during the Covid-19 pandemic harvested the
data of millions of children (Human Rights Watch, 2022).
7. Algorithmic Bias
Algorithmic bias has been the focus of
many critiques of AI (e.g. Baker & Hawn,
2021; Birhane et al., 2022; Noble, 2018;
Samuel, 2021; Wachter, 2022; Zuboff,
2019).
8. AI’s Image Problem
“Images that are commonly used today often
misrepresent the technology, reinforce
harmful stereotypes and spread misleading
cultural tropes”.
https://betterimagesofai.org/about
9. AI’s Image Problem
• Represent a wider range of humans and human cultures than
‘caucasian businessperson’
• Represent the human, social and environmental impacts of AI
systems
• Reflect the realistically messy, complex, repetitive and statistical
nature of AI systems
• Accurately reflect the capabilities of the technology: it is
generally applied to specific tasks, it is not of human-level
intelligence, and it does not have emotions
• Show realistic applications of AI now, not in some unspecified
science-fiction future
• Don't show physical robotic hardware where there is none
• Avoid monolithic or unknowable representations of AI systems
• Don't show electronic representations of human brains
• Constitute a wider variety of ways to depict different types, uses,
sentiments, and implications of AI
https://betterimagesofai.org/about
Philipp Schmitt & AT&T Laboratories Cambridge / Better Images
of AI / Data flock (faces) / CC-BY 4.0
Nacho Kamenov & Humans in the Loop /
Better Images of AI / Data annotators labeling
data / CC-BY 4.0
10. AI Explicability as Interpretability
CC BY https://commons.wikimedia.org/wiki/File:Blackbox3D-withGraphs.png
• All algorithms in machine learning are black boxes to some extent (or at least ‘grey boxes’)
• Deep learning algorithms and neural networks recognise patterns across massive data sets, but reconstructing how they arrive at an output is problematic
• Outputs from ML systems also require interpretation
• Where does algorithmic accountability lie?
11. AI4People Ethical
Framework
The AI4People initiative
synthesizes 47 sets of
guidelines into four traditional
ethical principles and
proposes Explicability as a
new, AI-specific principle
(Floridi & Cowls, 2019)
12. AI Explicability as Interpretability
This typology, proposed by Markus et al. (2021), distinguishes
interpretability, which is human-readable, from fidelity, which is
the accurate, technical description of what happens in the
‘black box’.
Can we expect learners to understand these processes and
their effects on their learning?
For general stakeholders lacking expert knowledge, such
transparency presumably has limited value without a trusted
broker who can interpret on their behalf.
Explainability
• Interpretability (human comprehensibility)
  – Clarity (rationale)
  – Parsimony (conciseness)
• Fidelity (accurate description of tasks)
  – Completeness (input-output reporting)
  – Soundness (truthful to task model)
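The fidelity/interpretability distinction can be sketched in code. The example below is purely hypothetical (the scoring function, thresholds, and variable names are invented for this sketch, not drawn from any real AIED system): an opaque "black box" scores learners, a single human-readable rule serves as an interpretable surrogate, and fidelity is measured as the surrogate's rate of agreement with the black box.

```python
# Hypothetical sketch of the interpretability/fidelity distinction
# (after Markus et al., 2021). All functions and numbers here are
# invented for illustration only.
import random

random.seed(0)

def black_box(hours_studied, prior_grade):
    """An opaque model: its decision rule is not human-readable."""
    score = (0.05 * hours_studied + 0.01 * prior_grade
             + 0.002 * hours_studied * prior_grade)
    return score > 1.5

def surrogate(hours_studied, prior_grade):
    """An interpretable surrogate: one clear, human-readable rule."""
    return hours_studied >= 10 and prior_grade >= 50

# Fidelity: how faithfully the readable rule reproduces the black
# box, measured as agreement on randomly sampled inputs.
samples = [(random.uniform(0, 20), random.uniform(0, 100))
           for _ in range(1000)]
agreement = sum(black_box(h, g) == surrogate(h, g)
                for h, g in samples) / len(samples)
print(f"Surrogate fidelity: {agreement:.0%}")
```

The surrogate rule is maximally interpretable but only partially faithful; raising its fidelity (for instance, by adding more rules) tends to reduce its interpretability, which is the trade-off the typology captures.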
13. Four Challenges for Explicable AI
Based on Feldman, cited in Mueller, Hoffman, Clancey, Emrey & Klein (2019)

Confidentiality
Description: An algorithm may be confidential for reasons of competitive edge or trade secrecy, or as a matter of public security.
Normative aspect(s): Automated decision processes can create structural inequality, but biases are hard to identify when algorithms are legally protected.

Complexity
Description: Some algorithms are clearly understood by experts, but their complexity cannot easily be communicated to the layperson.
Normative aspect(s): XAI can aspire to develop algorithms which are more easily understood by non-specialists.

Unreasonableness
Description: Algorithms might use rationally justifiable reasoning to implement decisions or actions which are unfair or discriminatory.
Normative aspect(s): Algorithmic bias must be addressed and monitored.

Injustice
Description: Algorithms may be understood in their operation, but the legal and/or moral consequences also need to be explicated.
Normative aspect(s): Explication of justice-related dimensions.
14. AI & Data Justice
“A data justice approach is one that centres on equity,
recognition and representation of plural interests, and
the creation and preservation of public goods as its
principal goals.”
(Lopez et al., 2022)
Open Education and social justice
(Lambert & Czerniewicz, 2020)
AI and data/algorithmic justice
(Birhane, 2021)
15. AI & Data Justice: Benchmarks
• Preserving and strengthening public infrastructure and public goods
• Inclusiveness
• Contestability and accountability
• Global responsibility
(Lopez et al., 2022)
16. AI in Education
“Only 11 countries currently have government-endorsed K-12
AI curricula (none of which were in Africa), and only around
12% of time in these curricula was allocated to the social
and ethical implications of AI, and there are few evaluations
on the quality of AI curricula and the effectiveness of the
implementation of the curricula. Local curricula are especially
important given that countries share similarities but can be
radically different from one another.”
Miao & Holmes (2022)
17. Summary
• People underlie (and overlay) all technology.
• We are all already using AI - but how will AI be used in the future?
• How and what will people learn about AI? Will they need to understand what is ‘subterranean’?
• Will there be a new class of intermediaries or brokers to interface between human and machine?
• In the gold rush of AI expansion, the disruptive impact of AI on education is already being felt, but it is hardly understood.
• How will educators and learners reach informed perspectives? This is unclear at present.
• Many of the claims made about AI are hyperbolic and amount to overly simplistic solutionism.
• The human cost of training AI systems is generally hidden from sight.
18. References
Baker, R. S., & Hawn, A. (2021). Algorithmic Bias in Education. https://doi.org/10.35542/osf.io/pbmvz
Birhane, A. (2021). Algorithmic injustice: a relational ethics approach. Patterns, 2(2), 100205.
Birhane, A., Ruane, E., Laurent, T., Brown, M. S., Flowers, J., Ventresque, A., & Dancy, C. L. (2022). The Forgotten Margins of AI Ethics. FAccT '22: Proceedings of the 2022 ACM Conference on Fairness,
Accountability, and Transparency. https://doi.org/10.1145/3531146.3533157 / https://arxiv.org/abs/2205.04221v1
Coghlan, S., Miller, T. and Paterson, J. (2021). Good Proctor or “Big Brother”? Ethics of Online Exam Supervision Technologies. Philosophy and Technology, 34, 1581–1606.
https://doi.org/10.1007/s13347-021-00476-1
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
Human Rights Watch (2022). “How Dare They Peep into My Private Life?” Children’s Rights Violations by Governments that Endorsed Online Learning During the Covid-19 Pandemic. Human
Rights Watch. https://www.hrw.org/report/2022/05/25/how-dare-they-peep-my-private-life/childrens-rights-violations-governments
Jang, E. (2022). All Roads Lead to Rome: The Machine Learning Job Market in 2022. Eric Jang. https://evjang.com/2022/04/25/rome.html
Lambert, S., & Czerniewicz, L. (2020). Approaches to Open Education and Social Justice Research. Journal of Interactive Media in Education, 2020(1), 1. http://doi.org/10.5334/jime.584
Lopez et al. (2022).
Luckin, R., Holmes, W., Griffiths, M. & Forcier, L. B. (2016). Intelligence Unleashed. An argument for AI in Education. London: Pearson. https://discovery.ucl.ac.uk/id/eprint/1475756/
Miao, F., & Holmes, W. (2022). International Forum on AI and Education: Ensuring AI as a Common Good to Transform Education, 7-8 December; synthesis report.
https://discovery.ucl.ac.uk/id/eprint/10146850/1/381226eng.pdf
Mueller, S.T., Hoffman, R.R. , Clancey, W., Emrey, A. and Klein, G. (2019). Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for
explainable AI [Preprint]. DARPA XAI Literature Review. February 9. https://arxiv.org/pdf/1902.01876.pdf.
Noble, S. U. (2018). Algorithms of Oppression. NYU Press.
Samuel, S. (2021). AI’s Islamophobia problem. Vox. https://www.vox.com/future-perfect/22672414/ai-artificial-intelligence-gpt-3-bias-muslim
Wachter, S. (forthcoming) The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law (February 15, 2022). Tulane Law Review. Available at SSRN:
https://ssrn.com/abstract=4099100 or http://dx.doi.org/10.2139/ssrn.4099100
Zuboff, S. (2019). The Age of Surveillance Capitalism. Public Affairs Books.