1. Whose ethics? Whose AI?
A relational approach to the
challenge of ethical AI
ALT Winter Summit 2023: Ethics and AI
Helen Beetham
@helenbeetham
helenbeetham.substack.com
2. Whose ethics?
CC0 Public Domain via Wikimedia Commons
‘While ethics codes exist,
they may not be embedded
within all generative AI tools
and their incorporation, or
otherwise, may not be
something that users can
easily verify.’
Russell Group: ‘Principles on the Use of AI in Education’, July 2023
4. Whose ethics?
“Users”
All students and staff understand the
opportunities, limitations and ethical
issues associated with the use of
these tools and can apply what they
have learned as the capabilities of
generative AI develop…
The school or educator is able to
formulate some relevant questions
and engage in a constructive dialogue
with AI systems providers or with the
responsible public bodies …
5. Whose ethics?
• Think critically and consider
the wider environment
• Recognise responsibilities
and influence beyond your
institution
• Care of self and others
• Be accountable and
prepared to explain decisions
• Recognise possibility of bias
Association for Learning Technology, 2022
6. Relational ethics
• Relationships at the core
• Understanding context and ecosystem
• Asking the right questions
“in order to understand how to be ethical we
need to understand the dynamic, interwoven
contexts and relationships within which
[innovation] is designed and deployed”
From the Centre for Technomoral Futures and the UNICEF Data for Children Collaborative
7. The principle of positionality
“to pay attention to positionality,
reflexivity, and how this shapes the
production of knowledge...”
Farhana Sultana (2007) via the Equality
Institute
CC BY-SA 2.0 Rama via Wikimedia Commons
8. Pei Wang (2019) review of ‘The concept
of Artificial Intelligence’ in the Journal of
Artificial General Intelligence
“Every working definition of AI corresponds
to an abstraction that describes the mind
from a certain point of view…This
abstraction guides the construction of a
computer system that is [meant to be]
similar to a human mind in that sense,
while neglecting other aspects of the human
mind as irrelevant.”
Whose AI?
9. Herbert Simon on the ‘Logic
Theorist’ programme, 1956
‘We believe that we can start with
some of the most advanced human
activities—i.e. proving theorems—
and work back to the “simplest”’
Simon and Newell playing chess, image unt.univ-cotedazur.fr
Whose AI?
14. Whose (generative) AI?
“The wealthiest companies in history
unilaterally seizing the sum total of
human knowledge that exists in
digital, scrapable form and walling it
off inside proprietary products”
Naomi Klein, 2023
And why I prefer the term ‘synthetic media’:
The statistical modelling and re-synthesis of language, images, music, video,
data, and other digital records of human communications and cultural meanings
15. How synthesis works
1. Original ‘training’ data or corpus: human-authored text
2. Training process: model engineers continually adjust parameters over multiple training runs
3. Diverse forms of human refinement, from labelling to research and demonstrator texts
4. User prompts call and refine inferences, reused as training data
Central image from deciAI via Substack, annotations HB
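The four stages above can be sketched as a toy pipeline. This is a conceptual illustration only, using a simple word-bigram model as a stand-in for a large language model; all function and variable names are hypothetical, not from any real system:

```python
from collections import defaultdict
import random

# 1. Original 'training' data or corpus: human-authored text (toy example).
corpus = ["the cat sat on the mat", "the dog sat on the log"]

# 2. Training: count word bigrams. The counts are a toy stand-in for the
#    model parameters adjusted over multiple training runs.
def train(texts):
    params = defaultdict(lambda: defaultdict(int))
    for text in texts:
        words = text.split()
        for a, b in zip(words, words[1:]):
            params[a][b] += 1
    return params

# 3. Human refinement: e.g. labellers marking preferred continuations,
#    modelled here as simply boosting a chosen bigram's weight.
def refine(params, prev_word, preferred_next, boost=2):
    params[prev_word][preferred_next] += boost
    return params

# 4. Inference: a user prompt calls the model, which synthesises output
#    by sampling; that output can then be recycled as training data.
def generate(params, prompt_word, length=4, seed=0):
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        nxt = params.get(out[-1])
        if not nxt:
            break
        words, weights = zip(*nxt.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

params = train(corpus)
params = refine(params, "sat", "on")
text = generate(params, "the")
corpus.append(text)  # step 4: inference output reused as training data
print(text)
```

The loop from step 4 back into step 1 is the point of the diagram: synthesised outputs re-enter the corpus that future models are trained on.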
17. How synthesis makes us all more
productive
What is “productivity” in learning
(and in teaching, and in research)?
Who benefits? At the expense of what, or whom?
21. Environmental impact
“generating an image using a powerful AI
model takes as much energy as fully
charging your smartphone”
MIT Technology Review December 2023
Inference requires 4-10x the compute
when compared with indexed search
Stanford AI Index Report 2023
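A back-of-envelope calculation shows the scale these two quoted claims imply. Every number below other than the quoted 4-10x range is an assumption introduced for illustration (smartphone battery capacity, daily image volume, per-search energy), not a figure from the talk or its sources:

```python
# Assumed smartphone battery capacity in watt-hours (illustrative only).
PHONE_CHARGE_WH = 12.0
# Hypothetical daily volume of image generations.
IMAGES_PER_DAY = 1_000_000

# If one image costs roughly one phone charge, per the quoted claim:
daily_wh = PHONE_CHARGE_WH * IMAGES_PER_DAY
print(f"~{daily_wh / 1e6:.1f} MWh per day for a million images")

# Applying the quoted 4-10x inference-vs-search range to an assumed
# baseline of 0.3 Wh per conventional indexed search:
SEARCH_WH = 0.3
low, high = 4 * SEARCH_WH, 10 * SEARCH_WH
print(f"one AI inference ~ {low:.1f}-{high:.1f} Wh vs {SEARCH_WH} Wh per search")
```

The exact figures matter less than the multiplier: replacing indexed search with inference raises the energy cost of every query several times over.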
22. The knowledge ecology
Luke Munn, 2023
“The designer of the system holds the power
to decide what the truth of the world will
be…What will be left for higher education
when ChatGPT and other emerging LLMs
have become de facto arbiters of truth?”
23. “Skills humans need”
• Concedes agency to probabilistic systems
• Creates new divisions of intellectual labour, value and reward among people
• Defines ‘human’ and ‘intelligence’ universally
• Defines ‘human’ as whatever ‘technology’ is not (yet)
• Invests in a particular version of ‘the future’…
• … that has no future
24. AI ‘literacy’ of questioning
• How are outputs synthesised (really)?
• Who profits? Who is exploited or excluded? Who is not represented?
• What is the environmental impact?
• How do models amplify bias, inequality, and privatisation, as well as improving access and productivity?
• What are the risks to human creative and intellectual work in different scenarios of widespread use?
• Whose work / knowledge should be valued and why?
(C) Dominika Zarzycka, used with photographer’s permission
25. AI systems in education categorised as ‘high risk’ by the EU AI Act
• Adequate risk assessment and
mitigation
• High quality datasets to
minimise risks and biases
• Full record to ensure traceability
and accountability
• Appropriate human oversight
• Robustness, security, accuracy
Building an ecosystem for agency
and care
Image: Ada Lovelace Institute