On 28 May 2019 Ms. Nathalie Smuha (KU Leuven and EU Commission DG Connect) presented the European strategy with regard to Artificial Intelligence, which includes assembling a high-level expert group on AI with a twofold mission: (1) draft guidelines for Trustworthy AI and (2) draft recommendations in support of policy and investments.
The second half of the presentation focused on the guidelines for Trustworthy AI, of which a first final version was published in April 2019. The guidelines are layered so that each level builds on the one below it:
- level 0 (foundation): AI should be lawful, ethical and robust
- level 1 (principles): AI should respect human autonomy, prevent harm, be fair and be explicable
- level 2 (requirements): AI should meet requirements linked to 7 groups: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental well-being, and (7) accountability.
- level 3 (questions): AI developers and deployers should ask themselves a number of questions. The high-level expert group has worked out 131 questions to guide the practical implementation of trustworthy AI. These questions are subject to a practice test: anyone can try them out and give the expert group feedback.
This framework is comparable to others, such as those of Japan, Canada, Singapore, Dubai, ... and the one from the OECD (published in May 2019).