What framework for a responsible use of AI in education?

Presentation by Stéphan Vincent-Lancrin, Deputy Head of the OECD division for innovation in education, given at a webinar hosted by the OECD Berlin Centre and the Konrad-Adenauer-Stiftung on 2 October 2020.

  1. What framework for a responsible use of AI in education? Stéphan Vincent-Lancrin, Ph.D., Deputy Head of Division, Senior Analyst and Project Leader, Centre for Educational Research and Innovation, Directorate for Education and Skills. Berlin, 2 October 2020
  2. AI on the rise
  3. Advanced education technology expenditure, 2018 and 2025 estimate, USD billions (Source: HolonIQ, January 2019):
     • AR/VR: 1.8 (2018) → 12.6 (2025 est.)
     • AI: 0.8 → 6.1
     • Robotics: 1.3 → 3.1
     • Blockchain: 0.1 → 0.6
  4. China represents over 50% of global education venture capital investment. Venture capitalists invested USD 8B in 2018, up from USD 2B in 2014, mainly from China (Source: HolonIQ, January 2019):
     • Total investment (USD billions): 1.8 (2014), 4.2 (2015), 3.2 (2016), 4.4 (2017), 8.2 (2018)
     • China's share rose from 33% (2014) to 63% (2018), while the United States' share fell from 56% to 20%; India, the European Union and others account for the remainder
  5. A few key points
  6. OECD (and G20) Principles on Artificial Intelligence:
     • Inclusive growth, sustainable development and well-being
     • Human-centred values and fairness
     • Transparency and explainability
     • Robustness, security and safety
     • Accountability
  7. What does that mean in education?
     • Usefulness
       – Develop solutions with stakeholders (teachers, etc.), not just EdTech
       – Work with schools on the benefits of the technological solution so it gets used
     • Effectiveness
       – Verify that AI solutions do what they claim, e.g. give accurate diagnoses/predictions (a sketch of such a check follows the slide list)
       – Ensure they improve outcomes, e.g. support interventions that solve the problems
     • Equity
       – Favour cheap solutions running on existing platforms (the digital divide is bigger than we thought)
       – Establish standards and facilitate interoperability
  8. What does it mean in education?
     • Fairness
       – Ensure that you are not replicating biases from your historical data (e.g. in machine learning) or from the human choices made in designing the algorithm
       – Ensure that you are not creating new biases, e.g. look at the results (a per-group audit is sketched after the slide list)
     • Transparency
       – Open data/open algorithms: allow anyone (i.e. other experts) to see and verify/challenge/improve the algorithm
       – Explain how the algorithm works and which choices were made (to the extent possible)
       – Involve stakeholders in discussing the choices made when the stakes are high
     • Data protection
       – Data protection regulation exists in most countries: GDPR in Europe, FERPA in the US, etc.
       – Risk management policy: a zero-risk policy is not possible
  9. Ethics or regulation?
     • Questions
       – Are there (possible) benefits we do not want, because of the risks of data misuse or simply because the solution is too intrusive?
       – Should AI give feedback, support and diagnoses, or make decisions?
       – Constant monitoring/tracking (surveillance?), bio-markers, etc.?
     • Can we have creative solutions?
       – Delete data as they are collected? (a data-minimisation sketch follows the slide list)
       – Invest in plausible theories?
       – Build trust
     • Ethics and regulation should allow us to do something…
  10. THANK YOU
      Stephan.Vincent-Lancrin@oecd.org
      stephan-vincent-lancrin
      https://oe.cd/educationceriinnovationstrategy
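
The effectiveness point on slide 7 ("verify that AI solutions do what they claim") amounts to checking a tool's predictions against outcomes that were actually observed. The sketch below is one minimal way to run such a check on a held-out set of students; the data, the 0/1 "at risk" labels and the plain accuracy metric are hypothetical illustrations, not anything taken from the presentation.

```python
# Minimal sketch of an "effectiveness" check for an AI diagnostic tool:
# compare the tool's predictions with outcomes observed on held-out students.
# All names and numbers below are hypothetical.

from typing import List

def accuracy(predicted: List[int], observed: List[int]) -> float:
    """Share of students for whom the tool's diagnosis matched the observed outcome."""
    assert len(predicted) == len(observed) and predicted
    hits = sum(1 for p, o in zip(predicted, observed) if p == o)
    return hits / len(predicted)

# Hypothetical held-out data: 1 = "at risk of falling behind", 0 = "not at risk".
tool_predictions  = [1, 0, 0, 1, 1, 0, 1, 0]
observed_outcomes = [1, 0, 1, 1, 0, 0, 1, 0]

print(f"Diagnostic accuracy on held-out students: "
      f"{accuracy(tool_predictions, observed_outcomes):.0%}")
```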
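Slide 8's fairness bullet ("look at the results") can be made concrete by breaking those results down by a sensitive attribute. The sketch below is one possible audit over the same kind of held-out prediction/outcome data as above; the group labels and figures are invented, and the choice of plain accuracy (rather than, say, false-positive rates) is an assumption.

```python
# Minimal sketch of a per-group fairness audit: compute the same quality metric
# separately for each group so that inherited or newly created biases show up.
# Group labels and data are hypothetical.

from collections import defaultdict
from typing import Dict, List, Tuple

def accuracy_by_group(records: List[Tuple[str, int, int]]) -> Dict[str, float]:
    """records holds (group, predicted, observed) triples; returns per-group accuracy."""
    hits: Dict[str, int] = defaultdict(int)
    totals: Dict[str, int] = defaultdict(int)
    for group, predicted, observed in records:
        totals[group] += 1
        hits[group] += int(predicted == observed)
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical audit data: (group, tool prediction, observed outcome).
audit_records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]

for group, acc in accuracy_by_group(audit_records).items():
    # A large gap between groups is a signal to revisit the training data or the model.
    print(f"{group}: accuracy {acc:.0%}")
```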
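Slide 9 asks whether data could be deleted as they are collected. One hedged reading of that idea is plain data minimisation: keep only the aggregate a learning tool actually needs and never persist the student-level events. The sketch below assumes a hypothetical per-exercise score stream; the event format and the choice of aggregate are illustrative, not something proposed in the presentation.

```python
# Minimal sketch of "delete data as they are collected": maintain a running
# count and mean per exercise and discard each raw event after updating them.

from dataclasses import dataclass
from typing import Dict

@dataclass
class RunningStats:
    """Aggregate kept per exercise; individual events are never stored."""
    count: int = 0
    mean_score: float = 0.0

    def update(self, score: float) -> None:
        # Incremental mean, so the raw score can be dropped right after this call.
        self.count += 1
        self.mean_score += (score - self.mean_score) / self.count

aggregates: Dict[str, RunningStats] = {}

def ingest(exercise_id: str, score: float) -> None:
    # Update the aggregate; the (student, score) event is not written anywhere.
    aggregates.setdefault(exercise_id, RunningStats()).update(score)

for exercise, score in [("ex1", 0.8), ("ex1", 0.6), ("ex2", 0.9)]:
    ingest(exercise, score)

print({k: (v.count, round(v.mean_score, 2)) for k, v in aggregates.items()})
```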
