The document discusses connecting the ethics and epistemology of AI. It argues that ethics and epistemology should be considered together throughout the entire AI process, from design to use, rather than as separate post-hoc assessments. The authors propose an "ethics-cum-epistemology" approach in which normative and epistemological questions are addressed at each stage. They also analyze scenarios involving experts and non-experts to show how ethical and epistemological questions are answered differently depending on the levels of expertise of the actors involved. The goal is to shift the focus from AI outcomes to the full process in order to better ensure AI systems are developed and used responsibly.
Connecting the epistemology and ethics of AI
1. CONNECTING THE ETHICS
AND EPISTEMOLOGY OF AI
FEDERICA RUSSO, ERIC SCHLIESSER, JEAN WAGEMANS
UNIVERSITY OF AMSTERDAM
@FEDERICARUSSO | @NESCIO13 | @JEANWAGEMANS
2. OUTLINE
● From Ethics aut Epistemology to Ethics cum Epistemology
○ Disconnected projects
○ Ethics as a post-hoc assessment
○ Shifting focus from output to process
○ Ethics as continuous assessment, from design to use
● What can XAI learn from argumentation theory?
o A crash course on arguments from expert opinion
o 4 simplified scenarios
o A normative stance for real scenarios
4. DISCONNECTED PROJECTS
• Disconnected projects:
• [Ethics] Questions of how to make AI ethically compliant, ensuring that algorithms are as fair as
possible and as unbiased as possible.
• [Epistemology] Questions of the transparency / opacity of AI, i.e., AI as a glass or an opaque box.
• Our approach:
• Not whether there is intrinsic value in XAI, but how questions of epistemology bear on ethics,
and vice versa
• Broader than value-sensitive design: we care about the whole process from design to use,
considering multiple actors
• ‘Ethics’ as shorthand for ‘axiology’: values broadly construed, so as to include social aims, etc.
5. ETHICS AS POST HOC ASSESSMENT
• AI raises important ethical concerns, therefore we need to produce suitable
mechanisms:
• To audit ethics compliance
• To verify responsibility and accountability
• A number of excellent protocols exist, and they are valuable
• Yet, some scholars criticize the Ethics of AI as mere ‘window-dressing’
• Rather, we aim to contribute to the ‘scaffolding’
Post-hoc assessment
6. ‘STAND ALONE’ EPISTEMOLOGY
• A vast, rich, fast-growing debate on epistemology of AI
• When / how is an AI reliable, trustworthy? Hence, under which conditions can we trust
the outcome of an AI?
• A lot hinges on the definitions of transparency | opacity | accuracy | explainability | …
• With wide agreement that most AI systems are opaque
• So, how can we trust outcomes of opaque AI?
• But the whole debate is orthogonal to ethics concerns
8. COMPUTATIONAL RELIABILISM:
WHAT SHOULD WE TRUST?
(CR) If S’s believing p at t results from m, then S’s belief in p at t is justified, where S is a cognitive agent,
p is any truth-valued proposition related to the results of a computer simulation, t is any given time, and
m is a reliable computer simulation. (Durán & Formanek)
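Schematically, the CR conditional can be rendered as follows (the shorthands Bel, Rel, and Just are added here for illustration and are not the authors’ notation):

```latex
\[
\text{(CR)}\quad
\bigl(\text{Bel}_S(p,t)\ \text{results from}\ m\bigr)\ \wedge\ \text{Rel}(m)
\;\Longrightarrow\;
\text{Just}\bigl(\text{Bel}_S(p,t)\bigr)
\]
```

Here $\text{Bel}_S(p,t)$ is S’s belief in p at t, $\text{Rel}(m)$ says that m is a reliable computer simulation, and $\text{Just}$ marks the belief as justified.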
• Not ‘more transparency’, but focus on process that makes output reliable
• CR indicators: verification and validation methods; robustness analysis; a history of (un)successful
implementations; expert knowledge
• We build on CR to:
• Include values in CR more explicitly
• Re-enter considerations about transparency
• Include more actors explicitly
9. ETHICAL COMPUTATIONAL RELIABILISM
(ECR) If S’s believing p at t results from m, then S’s belief in p at t is justified, where S is a cognitive
agent, p is any truth-valued proposition related to the results of an AI, t is any given time, and m is a
reliable algorithmic mediation that does not (intentionally) generate foreseeable asymmetric harm patterns
to vulnerable populations.
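Schematically (shorthand notation added here for illustration, not the authors’ symbols), ECR strengthens CR’s antecedent with a no-harm conjunct, where $\text{Harm}(m)$ abbreviates “m (intentionally) generates foreseeable asymmetric harm patterns to vulnerable populations”:

```latex
\[
\text{(ECR)}\quad
\bigl(\text{Bel}_S(p,t)\ \text{results from}\ m\bigr)
\ \wedge\ \text{Rel}(m)
\ \wedge\ \neg\,\text{Harm}(m)
\;\Longrightarrow\;
\text{Just}\bigl(\text{Bel}_S(p,t)\bigr)
\]
```

Writing it this way makes visible that ECR is strictly more demanding than CR: reliability alone no longer suffices for justification.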
• We need to make purpose explicit
• One purpose is to not intentionally harm vulnerable populations
• Ex-ante ethical assessment is key
• It may raise costs at the beginning, but reduce litigation costs later
• ECR is no magic bullet
• Prevention, accountability, and remedy of some unforeseeable asymmetric harm patterns may still be outside ECR
• We need complementary design, assessment, regulatory mechanisms in place, and at different levels
10. RE-INTRODUCING TRANSPARENCY
• Creel’s 3 types of transparency:
• Functional: “knowledge of the algorithmic functioning of the whole”
• Structural: “knowledge of how the algorithm was realized in code”
• Run: “knowledge of the program as it was actually run in a particular instance,
including the hardware and input data used”
• Creel’s 3 types of transparencies help us:
• Focus on epistemology
• Introduce actors explicitly: transparency for whom?
• Hinted at in Creel, but not developed in her work
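The “transparency for whom?” question can be sketched as a toy mapping from actors to the transparency types each can realistically assess. The actor roles and the mapping below are illustrative assumptions of ours, not Creel’s:

```python
from enum import Enum

class Transparency(Enum):
    # Creel's three types, paraphrased from the slide
    FUNCTIONAL = "knowledge of the algorithmic functioning of the whole"
    STRUCTURAL = "knowledge of how the algorithm was realized in code"
    RUN = "knowledge of a particular run, incl. hardware and input data"

# "Transparency for whom?" -- a hypothetical mapping (our assumption)
# from actor to the transparency types that actor can assess.
ACCESSIBLE_TO = {
    "end user":  {Transparency.FUNCTIONAL},
    "auditor":   {Transparency.FUNCTIONAL, Transparency.RUN},
    "developer": {Transparency.FUNCTIONAL, Transparency.STRUCTURAL, Transparency.RUN},
}

def can_assess(actor: str, t: Transparency) -> bool:
    """Can this actor meaningfully assess this type of transparency?"""
    return t in ACCESSIBLE_TO.get(actor, set())

print(can_assess("end user", Transparency.STRUCTURAL))  # False
```

The point of the sketch is only that transparency is a relation between a system and an actor, not a monadic property of the system.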
12. PLAN: PROCESS + VALUES + ACTORS
• We build on CR and on Creel’s account of transparency:
• Look at the whole process (=design, implementation, use) before the outcome
• Consider which values enter at each stage and how
• Consider how different actors answer epistemological and ethical queries differently
13. LESSONS FROM PHIL SCI ON MODEL VALIDATION
• ‘Model validation’ in a restricted CS sense = adequacy of the model with respect
to empirical data
• Model validation in a broader Phil Sci sense = how to trust the whole process?
• Formulation of research hypothesis, selection of background knowledge and theory,
interpretation of results, possibly use (e.g. in policy), …
• Algorithmic procedures are a case in point, not special with respect to other
modelling strategies in science & technology
14. WHERE IS ETHICS IN MODEL VALIDATION?
• At each and every point of the process we can (and should) raise considerations about
• Epistemology > transparency, explainability, validation/verification, …
• Ethics > which values are operationalized? What is intended? What is foreseeable?
How?
• We agree with Kearns & Roth: values can be operationalised
• Unlike Kearns & Roth, we see this not as a trade-off, but as a design choice proper
15. ETHICS AS CONTINUOUS ASSESSMENT
• Ethical considerations have to be raised
• Already at the design stage
• Throughout the whole process
• And in combination with epistemological / technical considerations
• Epistemology-cum-Ethics: the way forward for XAI
• We care about the role of designers, programmers, engineers and other actors too
• Holistic model validation and glass-box epistemology ensure the possibility of inspecting the
system at any time
16. SYNERGIES
• Ethics-cum-Epistemology complements existing approaches:
• Ethics auditing (post-hoc), see e.g. Mökander & Floridi
• Ethics training, see e.g. Bezuidenhout & Ratti
• [We definitely need a solid legal framework too]
17. TO RECAP THE ARGUMENT SO FAR
• To trust an outcome we need to look at the process
• We expand CR into ECR, ‘re-inject’ 3 types of transparency, enlarge it to ‘holistic model
validation’
• With ‘holistic model validation’ we claim that values enter at each and every step of the
process
• This is how we connect epistemology and ethics
• Next question: who can assess the process and how?
20. DISCLAIMERS
• We are aware that expertise
• Is not binary, but has shades of gray
• Can overlap across experts, and across groups of experts
• Can be ascribed to non-human agents too
• For simplicity, we
• Confine the discussion to human experts
• Consider that expertise concerns ‘technical features’ of an AI system, and that
• Actors have or do not have such expertise
21. NOTATION: EPISTEMIC SYMMETRY
EPISTEMOLOGICAL QUERIES
• Expert A: Can I trust output of
algorithm G?
• Expert B: Yes. Look at technical
features XYZ.
NORMATIVE QUERIES
• Expert A: Is the algorithm G fair?
• Expert B: Yes. Look at technical
features XYZ.
Experts A, B have equal or comparable expertise
22. NOTATION: EPISTEMIC ASYMMETRY
EPISTEMOLOGICAL QUERIES
• Non-expert: Can I trust output of
algorithm G?
• Expert: Yes. You can trust my
expertise in designing and
implementing technical features XYZ.
NORMATIVE QUERIES
• Non-expert: Is algorithm G fair?
• Expert: Yes. You can trust I comply
with ethics requirements, as
mandated by institution Y.
Non-expert cannot assess technical details, s/he has to trust Expert
24. LEARNING FROM ARGUMENTATION THEORY
ARGUMENT FROM EXPERT OPINION
p is true, because p is said by expert E
POSSIBLE CRITICAL QUESTIONS
Is E really an expert about p?
Did expert E really say p?
Do other experts agree?
Are there other interests at play?
Check which form of institutionalization
guarantees trusting the source of
expertise
Check the content p said by E
Compare p with other expert opinions
Check other institutional guarantees
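The scheme and its critical questions lend themselves to a small checklist sketch. The Python class below is a toy illustration under the simplifying assumption that each critical question receives a yes/no answer; all names are ours, not part of the argumentation-theory literature:

```python
from dataclasses import dataclass

@dataclass
class ExpertOpinionArgument:
    """Argument from expert opinion: 'p is true, because p is said by expert E'."""
    claim: str   # p
    expert: str  # E
    # Answers to the critical questions (True = the check passes)
    is_expert_about_p: bool = False    # Is E really an expert about p?
    really_said_p: bool = False        # Did expert E really say p?
    other_experts_agree: bool = False  # Do other experts agree?
    no_other_interests: bool = False   # Are there no other interests at play?

    def critical_questions(self) -> dict:
        return {
            "institutional guarantee of expertise": self.is_expert_about_p,
            "content check of what E said": self.really_said_p,
            "comparison with other expert opinions": self.other_experts_agree,
            "other institutional guarantees (interests)": self.no_other_interests,
        }

    def presumptively_acceptable(self) -> bool:
        # The appeal to expertise stands only while every critical question
        # is answered positively; one open question defeats the presumption.
        return all(self.critical_questions().values())

arg = ExpertOpinionArgument(
    claim="the output of algorithm G can be trusted",
    expert="the system's designer",
    is_expert_about_p=True,
    really_said_p=True,
    other_experts_agree=True,
    no_other_interests=False,  # e.g. the designer also sells the system
)
print(arg.presumptively_acceptable())  # False: one critical question is open
```

The design mirrors the defeasible character of the scheme: the conclusion is only presumptively acceptable, and any unanswered critical question withdraws the presumption.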
26. SIMPLIFIED SCENARIO 1:
EPISTEMIC SYMMETRY OF EXPERTS
• Expert A: “How did you get to result
X?”
• Expert B: “Because the system is
designed such-and-such”
• Expert A: “Is your AI system fair and
transparent?”
• B: “Yes, I operationalized concepts
XYZ in such-and-such way”
A question about
epistemology of AI
Expert B gives technical
details about AI system
A question about ethics
of AI
Expert B gives technical
details about how AI
system is ethical
In case of epistemic symmetry between experts, both epistemological
and ethical questions can be answered with technical details of AI
28. SIMPLIFIED SCENARIO 2A:
EPISTEMIC ASYMMETRY
• Non-expert: “I am diagnosed with disease X, why?”
• Expert: “Because the AI said you are in reference class XYZ”
• Non-expert: “Is your AI fair and unbiased?”
• Expert: “Yes, I operationalized XYZ in such-and-such way”
A question about the epistemology of AI: the expert’s technical answer is meaningless to the non-expert
As a non-expert, if you cannot grasp the epistemology, you inquire about the axiology; the expert’s technical answer is again meaningless to the non-expert
In the case of epistemic asymmetry, the non-expert can grasp neither the epistemological nor the ethical issues through the technical details of the AI
30. SIMPLIFIED SCENARIO 2B:
EPISTEMIC ASYMMETRY
• Non-expert: “I am diagnosed with disease X, why?”
• Expert: “Because the system said you are in reference class XYZ”
• Non-expert: “Is your AI system fair and transparent?”
• Expert: “Yes, our research and algorithms comply with standards and codes of conduct XYZ”
A question about the epistemology of AI: the expert’s technical answer is meaningless to the non-expert
As a non-expert, if you cannot grasp the epistemology, you inquire about the axiology; the expert’s answer appeals to axiology and institutionalization
In the case of epistemic asymmetry, both epistemological and ethical questions are answered by appealing to axiology and institutionalization: the non-expert trusts that the process complies with institutionalized standards
32. SIMPLIFIED SCENARIO 3:
EPISTEMIC SYMMETRY OF NON-EXPERTS
• Non-expert A: “My request for a loan was rejected, why?”
• Non-expert B: “Because the AI said you don’t comply with XYZ”
• Non-expert A: “Is your AI system fair and unbiased?”
• Non-expert B: “Yes, our bank is part of the EU Federation of Ethical Banks”
A question about the epistemology of AI: the non-expert cannot give details about the process, only the output
A question about the ethics of AI: the non-expert answers both epistemological and ethical questions with institutionalization
In the case of epistemic symmetry between non-experts, epistemological and ethical questions are answered by appealing to axiology and institutionalization: the non-expert trusts that the process complies with institutionalized standards
33. FROM SIMPLIFIED SCENARIOS TO REAL SCENARIOS
• How to make ‘ethics-cum-epistemology’ normative
• Requests for ethical compliance have to be anticipated with clear and accessible code documentation
• High standards on ethics are not a compromise on, e.g., efficiency, but a positive stance on, e.g., fairness and transparency
  • Kearns & Roth: a trade-off
  • Russo-Schliesser-Wagemans: value-promoting
• Easier said than done: many questions about the governance of ‘ethical XAI’ still need to be addressed, e.g.:
  • Should we aim for ‘more institutionalization’ as a safeguard?
  • What could be a better use of ethics forms and guidelines?
35. EPISTEMOLOGICAL AND NORMATIVE
• It is high time that epistemological and normative questions are considered together, rather than separately
• To develop an ethics-cum-epistemology, we shift the focus from the outcome to the whole process
• At each stage of the process, normative and epistemic questions have to be considered
• Ethics is continuous assessment, rather than post-hoc assessment
(Diagram: Epistemological Queries ↔ Normative Queries)
36. ARGUMENTS FROM EXPERT OPINION AND AI
• With an ethics-cum-epistemology, and with the aid of argumentation theory, we account for situations of epistemic symmetry and asymmetry
• In epistemic symmetry, both epistemological and normative questions can be answered at the technical level
• In epistemic asymmetry, axiology and institutionalization help address both epistemological and normative questions
(Diagram: Expert ↔ Non-expert)
37. CONNECTING THE ETHICS
AND EPISTEMOLOGY OF AI
FEDERICA RUSSO, ERIC SCHLIESSER, JEAN WAGEMANS
UNIVERSITY OF AMSTERDAM
@FEDERICARUSSO | @NESCIO13 | @JEANWAGEMANS
Thanks for your attention