Presentation by Stéphan Vincent-Lancrin, Deputy Head of the OECD Division for Innovation in Education, at a webinar hosted by the OECD Berlin Centre and the Konrad-Adenauer-Stiftung on 2 October 2020.
What framework for a responsible use of AI in education?
Stéphan Vincent-Lancrin, Ph.D.
Deputy Head of Division,
Senior Analyst and Project Leader
Centre for Educational Research and Innovation,
Directorate for Education and Skills
Berlin, 02 October 2020
OECD (and G20) Principles on Artificial Intelligence
• Inclusive growth, sustainable development
• Human-centred values and fairness
• Transparency and explainability
• Robustness, security and safety
What does that mean in education?
– Develop solutions with stakeholders (teachers, etc.) – not just EdTech
– Work with schools on the benefits of the technological solution so it gets used
– Verify that AI solutions do what they claim (e.g. give accurate diagnoses/predictions)
– Ensure they improve outcomes (e.g. support interventions to solve the problems)
– Privilege cheap solutions running on existing platforms (the digital divide is bigger than we think)
– Establish standards and facilitate interoperability
What does it mean in education?
– Ensure that you are not replicating biases due to your historical data (e.g. machine learning) or due to the human choices made in designing the algorithm
– Ensure that you are not creating new biases (e.g. look at the results)
– Open data/open algorithm: allow anyone (i.e. other experts) to see and verify/challenge/improve the algorithm
– Explain how the algorithm works and which choices were made (to the extent possible)
– Involve stakeholders to discuss the choices made when the stakes are high
• Data protection
– Data protection regulation in most countries: GDPR in Europe, FERPA in the US, etc.
– Risk management policy: a zero-risk policy is not possible
Ethics or regulation?
– Are there (possible) benefits we do not want because of the risks of misuse of the data – or just because the solution is too…?
– Should AI give feedback, support, diagnose – or make decisions?
– Constant monitoring/tracking (surveillance?), bio-markers, etc.?
• Can we have creative solutions?
– Delete data as they are collected?
– Invest in plausible theories?
– Build trust
• Ethics and regulation should allow us to do something…