The THEMIS 5.0 project engages users through AI-driven interactive dialogues and helps them assess how trustworthy a particular AI decision is.
2. KUL & IPT draft contribution to D2.1.
1. State-of-the-art outline [focus on KUL contribution]
1.1. Ethical and legal frameworks for Trustworthy AI (2.x.)
1.2. Trustworthiness criteria: philosophical, ethical and legal relevance
of fairness, robustness and accuracy (2.3.7.)
2. What’s next?
3. List of references
3. 1.1. STATE OF THE ART OUTLINE - ETHICAL AND LEGAL FRAMEWORKS FOR TRUSTWORTHY AI (2.X.)
4. Content & objective
Themis 5.0 technology is per se AI-driven -> the technology needs to be designed in adherence to the ethical and legal requirements for Trustworthy AI applicable in the EU.
The first section of our contribution identifies the ethical and legal frameworks that Themis 5.0 technology needs to take into consideration.
The legal and ethical impact assessment of the technology will then be carried out in T1.4.
5. 1.1. ETHICAL AND LEGAL FRAMEWORKS FOR TRUSTWORTHY AI
Main Takeaways
• AI governance entails the overarching management of practices concerning
artificial intelligence, extending beyond legal frameworks to encompass a
broad spectrum of regulatory and ethical considerations.
• The role of ethics and soft law is prominent in the present early stage of AI
legal rules’ development worldwide.
6. 1.1. ETHICAL AND LEGAL FRAMEWORKS FOR TRUSTWORTHY AI
Main Takeaways
• Frameworks are still changing and their boundaries blurred
• The focus on the EU approach to AI is complemented with references to the other main AI governance initiatives undertaken by i) international and regional fora where the EU operates; ii) cooperation efforts with the US; iii) the UK
7. 1.1. ETHICAL AND LEGAL FRAMEWORKS FOR TRUSTWORTHY AI
Main Takeaways
• Major attention to the AI Act -> the first horizontal regulation in the world specifically tailored for AI.
However ->
• There is agreement on the final text, but the legislative procedure is in its final stage -> publication of the official text will take place in Spring 2024, after the final vote of the EU Parliament and endorsement by the Council
• The Commission will have to issue several implementing regulations and delegated acts
• Application will take place mainly within 2 years from entry into force, with some exceptions, including prohibitions of certain AI systems after 6 months; codes of practice after 9 months; classification of some high-risk AI systems and the consequent obligations after 3 years
For more info on the finalization of the legislative process see: EU countries give crucial nod to first-of-a-kind Artificial Intelligence law - Euractiv
8. 1.1. ETHICAL AND LEGAL FRAMEWORKS FOR TRUSTWORTHY AI
Main Takeaways
In the meantime:
• given the risk-based approach to regulation adopted by the AI Act -> it is crucial to understand in which risk category THEMIS 5.0 will be placed
• Focus on existing regulatory instruments, e.g. the GDPR
• Still prominent role of the AI HLEG Ethics Guidelines and ALTAI
9. 1.1. ETHICAL AND LEGAL FRAMEWORKS FOR TRUSTWORTHY AI
Tool to navigate the final draft of the text: The AI Act Explorer | EU Artificial Intelligence Act (keeping in mind that it is an unofficial source)
12. 1.2. ETHICAL AND LEGAL RELEVANCE OF FAIRNESS, ROBUSTNESS AND ACCURACY
Content & objective
Themis 5.0 technology will optimize specific trustworthiness requirements (i.e. fairness, accuracy, robustness) of third-party AI systems, whose understanding and implementation need to be legally and ethically compliant.
The section aims to
• provide philosophical (IPT), ethical and legal explanations of fairness, accuracy, and robustness
• clarify why they have ethical and legal relevance in Themis 5.0
13. 1.2. ETHICAL AND LEGAL RELEVANCE OF FAIRNESS, ROBUSTNESS AND ACCURACY
At the outset
• According to the current EU framework, Trustworthy AI
involves more dimensions and requirements than just fairness, robustness
and accuracy.
• Therefore, it is not completely accurate to refer to fairness, robustness and accuracy as the only elements of Trustworthy AI.
14. 1.2. ETHICAL AND LEGAL RELEVANCE OF FAIRNESS, ROBUSTNESS AND ACCURACY
Main takeaways
Fairness
Fairness is not a mere computational problem -> biases cannot "be fixed" by algorithms if their roots are not investigated with the involvement of the communities and individuals concerned.
Legal dimension
• equality and non-discrimination laws
• AIA proposal
• personal data processing
Ethical dimension
• societal perspective and social justice v. individual dimension
• substantive v. procedural dimension
15. 1.2. ETHICAL AND LEGAL RELEVANCE OF FAIRNESS, ROBUSTNESS AND ACCURACY
Main takeaways
• Trustworthiness requirements are not stand-alone -> they can work in synergy (e.g. disclosure and information about the expected performance of an AI system enhance both transparency and accuracy) or conflict (e.g. perfectly privacy-enhanced AI systems which are inaccurate) -> techniques are needed to balance them
• Trustworthiness requirements do not have a universal meaning -> their understanding depends on the context of use (e.g. fairness in detecting disinformation is not the same as in port management) and on the values prominent in a certain jurisdiction
18. 2. WHAT'S NEXT?
• Streamline with IPT -> more philosophical and ethical focus
• Tailor the legal and ethical analysis of fairness, robustness and accuracy to the use
case scenarios
• Draft the last paragraphs, together with IPT:
2.3.7.4. -> Evaluation of potential gaps or difficulties for the optimization of
fairness, accuracy and robustness by Themis 5.0. from an ethical and legal perspective
(use-case based)
2.3.7.5. -> Provide recommendations with essential considerations for
system developers to keep in mind while setting up the optimization of fairness,
robustness and accuracy by Themis technology
19. 3. LIST OF REFERENCES
Bertuzzi, L. (9 November 2023). OECD updates definition of Artificial Intelligence 'to inform EU's AI Act'. Euractiv.
Bietti, E. (2021). From Ethics Washing to Ethics Bashing: A Moral Philosophy View on Tech Ethics. Journal of Social Computing, 2 (3).
Bracy, J. (22 January 2024). EU AI Act: Draft consolidated text leaked online. IAPP.
Floridi, L. (2023). The Ethics of Artificial Intelligence. Oxford University Press.
Hallinan, D., Borgesius, F. Z. (2020). Opinions can be incorrect (in our opinion)! On data protection law's accuracy principle. International Data Privacy Law, 10 (1).
Heilinger, J. C. (2022). The Ethics of AI Ethics. A Constructive Critique. Philosophy & Technology, 35 (61).
Hillier, M. (20 February 2023). Why does ChatGPT generate fake references? Teche (mq.edu.au).
Hoffmann, A. L. (2019). Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22 (7).
Laux, J., Wachter, S., Mittelstadt, B. (2023). Trustworthy artificial intelligence and the European Union AI Act: On the conflation of trustworthiness and acceptability of risk. Regulation & Governance.
Pasquale, F. (20 August 2018). Odd Numbers: Algorithms alone can't meaningfully hold other algorithms accountable. Real Life.
Powles, J., Nissenbaum, H. (7 December 2018). The Seductive Diversion of 'Solving' Bias in Artificial Intelligence. OneZero, Medium.
Smuha, N. A., Ahmed-Rengers, E., Harkens, A., Wenlong, L., MacLaren, J., Piselli, R., Yeung, K. (2021). How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission's Proposal for an Artificial Intelligence Act. https://ssrn.com/abstract=3899991
News and academic articles, books and blog posts
20. 3. LIST OF REFERENCES
Charter of Fundamental Rights of the European Union, 2012/C 326/02. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:12012P/TXT
Consolidated Version of the Treaty on European Union, 2012/C 326/13.
Consolidated Version of the Treaty on the Functioning of the European Union, 2012/C 326/47. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:12012E/TXT
Council of Europe (1950). European Convention on Human Rights.
Council of Europe (2023). Consolidated Working Draft of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.
Council of Europe (2023). Guidelines on the responsible implementation of artificial intelligence systems in journalism.
Directive 2000/43/EC (Race Equality Directive).
Directive 2000/78/EC (Employment Equality Directive).
Directive 2004/113/EC (Gender Goods and Services Directive).
Directive 2006/54/EC (Gender Equality Directive).
Directive (EU) 2022/2555 (NIS 2 Directive).
European Commission (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final.
European Commission (2022). Proposal for a Directive on liability for defective products, COM(2022) 495 final.
European Commission (2022). Proposal for an AI Liability Directive, COM(2022) 496 final.
European Parliament. Draft Compromise Amendments on the Draft Report, Artificial Intelligence Act, version 1.1 (CA_IMCOLIBE_AI_ACT).
Regulation (EU) 2023/1230 (Machinery Regulation).
Regulation (EU) 2023/988 (General Product Safety Regulation).
Regulation (EU) 2016/679 (General Data Protection Regulation).
Regulation (EU) 2019/881 (Cybersecurity Act).
Legislation (in force and proposals)
21. 3. LIST OF REFERENCES
European Commission (2022). TTC Joint Roadmap for Trustworthy AI and Risk Management.
European Commission (2023). EU-U.S. Terminology and Taxonomy for Artificial Intelligence.
Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission (2019). Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission (2020). Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment.
OECD (2023). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449).
OECD (ongoing). Catalogue of Tools & Metrics for Trustworthy AI. OECD.AI
Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., Hall, P. (2022). Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. NIST Special Publication 1270, National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1270
European Equality Law Network on behalf of the European Commission (26 January 2023). A comparative analysis of non-discrimination law in Europe 2022.
European Equality Law Network on behalf of the European Commission (18 January 2023). A comparative analysis of gender equality law in Europe 2022.
G20 (9-10 September 2023). New Delhi Leaders' Declaration.
European Commission (2023). Hiroshima Process International Guiding Principles for Advanced AI Systems.
European Commission (2023). Hiroshima Process International Code of Conduct for Advanced AI Systems. https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems
UNESCO (2023). Recommendation on the Ethics of Artificial Intelligence.
UNESCO (2023). Ethical Impact Assessment: A Tool of the Recommendation on the Ethics of Artificial Intelligence.
World Health Organization (2023). Regulatory considerations on artificial intelligence for health.
Tabassi, E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST Trustworthy and Responsible AI, National Institute of Standards and Technology. https://doi.org/10.6028/NIST.AI.100-1
GOV.UK, Department for Science, Innovation and Technology (29 March 2023). A pro-innovation approach to AI regulation.
GOV.UK (1-2 November 2023). The Bletchley Declaration by Countries Attending the AI Safety Summit.
European Commission (25 April 2018). Communication on Artificial Intelligence for Europe, COM(2018) 237 final.
G20 (9 June 2019). Ministerial Statement on Trade and Digital Economy.
European Commission (19 February 2020). White Paper on Artificial Intelligence: A European approach to excellence and trust, COM(2020) 65 final.
The White House (4 October 2022). Blueprint for an AI Bill of Rights. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Soft Law, Guidelines and Policy documents
22. 3. LIST OF REFERENCES
Case law and other binding decisions
Axel Springer AG v. Germany, Application no. 39954/08 (European Court of Human Rights, 2012).
Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems, Case C-311/18 (ECJ, Grand Chamber, 2020).
Irish SA v. TikTok Technology Limited, Binding Decision 2/2023 (European Data Protection Board, 2023).
Maximillian Schrems v. Data Protection Commissioner, Case C-362/14 (ECJ, Grand Chamber, 2015).
Mosley v. the United Kingdom, Application no. 48009/08 (European Court of Human Rights, 2011).
von Hannover v. Germany (no. 1), Application no. 59320/00 (European Court of Human Rights, 2004).
The ethical dimension of fairness is broader than the legal one. While the legal dimension is mainly concerned with equality and non-discrimination, the ethical concept embraces also the societal perspective, taking into consideration social justice instances addressing systemic oppressions, which are less addressed by the legal dimension (Hoffmann, 2019).
In particular, fairness entails the substantive (equality before the law, non-discrimination and avoidance of biases, equal and just distribution of opportunities and cost, proportionality between means and ends) and the procedural dimension (redress against AI-enabled or assisted decisions, accountability of the system and its human operators, explicability of the decision-making process followed by the AI system, participatory design of the AI system). Both substantive and procedural fairness entail an individual dimension (e.g. equality before the law and non-discrimination, redress against decisions) and a societal/group dimension (e.g. equal and just distribution of opportunities and costs, participatory design of the AI system).
In the health domain fairness plays a crucial role, as it is of utmost importance to ensure that the AI system does not discriminate against patients, in the design of the model, in the collection of the datasets used by the model, and in the use made of the model.
In the media domain, the principle of proportionality, which falls under the fairness principle, gains prominence, since it is particularly relevant to ensure that the detection of disinformation practices does not impair the freedom of expression.
In the transportation domain, by contrast, such as port management, the principle of fairness plays a minor role, since the AI system likely has negligible implications in terms of discrimination and equality.
Similar contextual specifications apply to the other principles: the technical robustness of an AI system employed for port management entails safety considerations of a completely different scale than those considered in cancer detection, while an AI system used for disinformation detection likely does not entail relevant considerations in terms of technical robustness but has significant implications in terms of societal robustness, ensuring democratic knowledge circulation and evolution.