
20190528 - Guidelines for Trustworthy AI


On 28 May 2019, Ms. Nathalie Smuha (KU Leuven and EU Commission DG Connect) presented the European strategy on Artificial Intelligence, which includes assembling a High-Level Expert Group on AI with a double mission: (1) draft guidelines for Trustworthy AI and (2) draft recommendations in support of policy and investments.
The second half of the presentation focused on the guidelines for Trustworthy AI, whose first final version was published in April 2019. The guidelines are layered so that each level builds on the one below it:
- level 0 (foundation): AI should be lawful, ethical and robust
- level 1 (principles): AI should respect human autonomy, prevent harm, be fair and be explicable.
- level 2 (requirements): AI should meet requirements linked to 7 groups: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental well-being, and (7) accountability.
- level 3 (questions): AI developers and deployers should ask themselves a number of questions. The High-Level Expert Group has worked out 131 questions to guide the practical implementation of trustworthy AI. These questions are subject to a practice test: you can try them out yourself and give the expert group feedback.

This framework is comparable to other frameworks, such as those of Japan, Canada, Singapore and Dubai, and the one from the OECD (published in May 2019).
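The layered structure above lends itself to a simple data model: each of the seven requirement groups (level 2) carries assessment questions (level 3), and a deployer records yes/no answers to spot gaps. The following Python sketch is purely illustrative: the group names come from the guidelines, but the sample questions and the `unmet_requirements` helper are assumptions of this sketch, not the official 131-question assessment list.

```python
# Illustrative sketch only: requirement groups (level 2) mapped to sample
# self-assessment questions (level 3). The group names are taken from the
# guidelines; the questions themselves are invented for this example.
REQUIREMENTS = {
    "human agency and oversight": [
        "Can a human override or halt the system?",
    ],
    "technical robustness and safety": [
        "Has the system been tested against adversarial inputs?",
    ],
    "privacy and data governance": [
        "Is personal data minimised and access-controlled?",
    ],
    "transparency": [
        "Can the system's decisions be explained to affected users?",
    ],
    "diversity, non-discrimination and fairness": [
        "Has the training data been checked for representational bias?",
    ],
    "societal and environmental well-being": [
        "Has the environmental impact of training been assessed?",
    ],
    "accountability": [
        "Is there an auditable log of the system's decisions?",
    ],
}


def unmet_requirements(answers):
    """Return the requirement groups with at least one 'no' answer.

    `answers` maps each question string to True (yes) or False (no).
    Unanswered questions count as unmet, erring on the side of caution.
    """
    unmet = []
    for group, questions in REQUIREMENTS.items():
        if not all(answers.get(question, False) for question in questions):
            unmet.append(group)
    return unmet
```

For example, answering "yes" to everything except the transparency question would flag only the "transparency" group as unmet, pointing the deployer to the area that needs work before the system can be considered trustworthy under this (simplified) model.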



  1. Ethics Guidelines for Trustworthy AI. Nathalie Smuha, KU Leuven, Faculty of Law (Dept. International & European Law); Coordinator of the High-Level Expert Group on AI, European Commission
  2. Overview: 1. Background 2. Ethics 101 3. Ethics Guidelines for Trustworthy AI 4. Policy & Investment Recommendations 5. Next steps 6. Conclusions
  3. Background. What do we know? • AI technology is moving incredibly fast • Challenge for regulators • Impact of AI is multifold & not yet fully understood • Legal, ethical, social, economic… • AI is context-specific • Opportunities and challenges may differ across sectors / applications • We don't have all the answers • Humility & further research needed • Flexibility / adaptability of regulatory models needed • Interdisciplinary & multi-stakeholder approach is key
  4. Background. EU STRATEGY ON ARTIFICIAL INTELLIGENCE, published in April 2018: • Boost AI uptake • Tackle socio-economic changes • Ensure adequate ethical & legal framework. In this context: appointment of the Independent High-Level Expert Group on Artificial Intelligence (AI HLEG) in June 2018
  5. Background. COORDINATION WITH MEMBER STATES: Communication of 7 December 2018 • 70 actions in a wide range of domains • Updated on a yearly basis • Legal & Ethical Framework as one of the areas
  6. Background. Third Pillar: Legal & Ethical Framework • Acknowledging ethical concerns & establishing a harmonized ethical framework → High-Level Expert Group on Artificial Intelligence (AI HLEG): AI Ethics Guidelines • Evaluating fitness of existing regulatory frameworks: Liability (PLD) & Safety (MD) in particular → Expert Group on Liability & Future of Work → AI HLEG to draft policy recommendations (holistic approach) • Stakeholder participation: European AI Alliance → input gathering to inform EU policymaking & the AI HLEG from a wide range of stakeholders through an online platform • Beyond Europe → interacting with other institutions (CoE, OECD, UNESCO…) → interacting with other countries (Japan, Canada, Singapore…)
  7. Ethics 101 • Basic distinctions: applied ethics v normative ethics v meta-ethics; social ethics v individual ethics; ethics or practical philosophy v theoretical philosophy • Main ethical theories: deontologism; virtue ethics; consequentialism (teleological ethics)
  8. Ethics 101 • The Trolley Problem
  9. Ethics 101 • The Trolley Problem revisited • Moral Machine Experiment
  10. Ethics 101 • Which ethics? • Ethical relativism (cf. the Moral Machine Experiment) • Moral universalism (fundamental rights?) • Ethical scepticism • Ethics v Law
  11. High-Level Expert Group and mandate. 52 members from industry, academia and civil society; Chair: Pekka Ala-Pietilä. Two deliverables: • Ethics Guidelines for Artificial Intelligence • Policy & Investment Recommendations. Interaction with the European AI Alliance: a broad multi-stakeholder platform counting over 3000 members to discuss AI policy in Europe
  12. Ethics Guidelines for AI – Process. 18 December 2018: first draft published. December 2018 – February 2019: • open consultation • discussion with Member States • discussion on the European AI Alliance. March 2019: revised document delivered to the Commission. April 2019: final document published & welcomed through a Commission Communication
  13. Ethics Guidelines for AI – Intro. Trustworthy AI as our foundational ambition, with three components: Lawful AI, Ethical AI, Robust AI. Three levels of abstraction: from principles (Chapter I) to requirements (Chapter II) to assessment list (Chapter III). Human-centric approach: AI as a means, not an end
  14. Ethics Guidelines for AI – Principles. 4 ethical principles based on fundamental rights: • Respect for human autonomy • Prevention of harm • Fairness • Explicability
  15. Ethics Guidelines for AI – Requirements: • Human agency and oversight • Technical robustness and safety • Privacy and data governance • Transparency • Diversity, non-discrimination and fairness • Societal & environmental well-being • Accountability. To be continuously implemented & evaluated throughout the AI system's life cycle
  16. Ethics Guidelines for AI – Assessment List. Assessment list to operationalise the requirements: • practical questions for each requirement, 131 in total • tested through a piloting process to collect feedback from all stakeholders (public & private sector) • "quantitative" analysis track → open survey • "qualitative" analysis track → in-depth interviews. Official launch of piloting: 26 June, stakeholder event
  17. Ethics Guidelines for AI – Opportunities & Concerns. Beneficial opportunities, e.g. • climate change and sustainable infrastructure • health and well-being • quality education and digital transformation. Critical concerns, e.g. • identifying and tracking individuals • covert AI systems • citizen scoring • LAWS (lethal autonomous weapon systems) • potential long-term concerns
  18. Policy & Investment Recommendations. Second deliverable: different audience (Commission & Member States) • Ensuring Europe's competitiveness and policies for Trustworthy AI • Looking at key impacts and enablers • Document to be presented at the stakeholder event on 26 June 2019 • After the summer: recommendations for strategic sectors
  19. Next steps • 26 June: presentation of the Recommendations & kick-off of piloting • Feedback gathering on the assessment list from July to December 2019 • Revised version of the assessment list & sectoral recommendations in 2020 • The Commission will then decide on next steps: self-regulation / self-certification? Standardisation? Sectoral guidelines? Regulation?
  20. Conclusions • Fast-moving topic: regulatory humility → be smart! • Close link between research and legislation: solutions cannot be found in isolation • Balance between protection & innovation • Balance between ethics & regulation • To be continued…
  21. Thank you. Questions? Icons made by Freepik