CONNECTING THE ETHICS
AND EPISTEMOLOGY OF AI
FEDERICA RUSSO, ERIC SCHLIESSER, JEAN WAGEMANS
UNIVERSITY OF AMSTERDAM
@FEDERICARUSSO | @NESCIO13 | @JEANWAGEMANS
OUTLINE
● From Ethics aut Epistemology to Ethics cum Epistemology
○ Disconnected projects
○ Ethics as a post-hoc assessment
○ Shifting focus from output to process
○ Ethics as continuous assessment, from design to use
● What can XAI learn from argumentation theory?
○ A crash course on arguments from expert opinion
○ 4 simplified scenarios
○ A normative stance for real scenarios
2
FROM ETHICS AUT EPISTEMOLOGY
TO ETHICS CUM EPISTEMOLOGY
3
DISCONNECTED PROJECTS
• Disconnected projects:
• [Ethics] Questions of how to make AI ethically compliant, ensuring that algorithms are as fair as
possible and as unbiased as possible.
• [Epistemology] Questions of transparency / opacity of AI, i.e. , AI as a glass or opaque box.
• Our approach:
• Not whether there is an intrinsic value in XAI, but how questions of epistemology bear on ethics, and vice versa
• Broader than value-sensitive design: we care about the whole process from design to use, and we consider multiple actors
• ‘Ethics’ as shorthand for ‘axiology’: values understood broadly, so as to include social aims, etc.
4
ETHICS AS POST HOC ASSESSMENT
• AI raises important ethical concerns, therefore we need to produce suitable
mechanisms:
• To audit ethics compliance
• To verify responsibility and accountability
• A number of excellent protocols exist, and they are valuable
• Yet, some scholars criticize the Ethics of AI as mere ‘window-dressing’
• Rather, we aim to contribute to the ‘scaffolding’
5
Post-hoc assessment
‘STAND ALONE’ EPISTEMOLOGY
• A vast, rich, fast-growing debate on epistemology of AI
• When / how is an AI reliable or trustworthy? Hence, under which conditions can we trust the outcome of an AI?
• Lots hinges on definition of transparency | opacity | accuracy | explainability |…
• With wide agreement that most AI systems are opaque
• So, how can we trust outcomes of opaque AI?
• But the whole debate is orthogonal to ethical concerns
6
SHIFTING FOCUS:
FROM OUTCOME TO PROCESS
7
COMPUTATIONAL RELIABILISM:
WHAT SHOULD WE TRUST?
(CR) if S’s believing p at t results from m, then S’s belief in p at t is justified, where S is a cognitive agent, p is any truth-valued proposition related to the results of a computer simulation, t is any given time, and m is a reliable computer simulation. (Durán & Formanek)
• Not ‘more transparency’, but a focus on the process that makes the output reliable
• CR indicators: verification and validation methods; robustness analysis; a history of (un)successful
implementations; expert knowledge
• We build on CR to:
• Include values in CR more explicitly
• Reintroduce considerations about transparency
• Include more actors explicitly
8
ETHICAL COMPUTATIONAL RELIABILISM
(ECR) if S’s believing p at t results from m, then S’s belief in p at t is justified, where S is a cognitive agent, p is any truth-valued proposition related to the results of an AI, t is any given time, and m is a reliable algorithmic mediation without (intentionally) generating foreseeable asymmetric harm patterns to vulnerable populations. (See the schematic contrast with CR below.)
• We need to make purpose explicit
• One purpose is to not intentionally harm vulnerable populations
• Ex-ante ethical assessment is key
• It may raise costs at the beginning, but reduce litigation costs later
• ECR is no magic bullet
• Prevention, accountability, and remedy of some unforeseeable asymmetric harm patterns may still be outside ECR
• We need complementary design, assessment, regulatory mechanisms in place, and at different levels
9
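To make the relation between the two schemas explicit, here is a compact rendering (our notation, not the authors’ own formalism): write B_S(p,t) for ‘S believes p at t’ and J_S(p,t) for ‘S’s belief in p at t is justified’.

```latex
\begin{align*}
\text{(CR)}\quad  & B_S(p,t) \text{ results from a reliable computer simulation } m
                    \;\Rightarrow\; J_S(p,t)\\[4pt]
\text{(ECR)}\quad & B_S(p,t) \text{ results from a reliable algorithmic mediation } m\\
                  & \text{without (intentionally) generating foreseeable asymmetric harm patterns}
                    \;\Rightarrow\; J_S(p,t)
\end{align*}
```

ECR keeps the reliabilist structure of CR and adds an explicitly ethical condition on the mediation m.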
RE-INTRODUCING TRANSPARENCY
• Creel’s 3 types of transparency:
• Functional: “knowledge of the algorithmic functioning of the whole”
• Structural: “knowledge of how the algorithm was realized in code”
• Run: “knowledge of the program as it was actually run in a particular instance,
including the hardware and input data used”
• Creel’s 3 types of transparency help us:
• Focus on epistemology
• Introduce actors explicitly: transparency for whom? (see the sketch below)
• Hinted at in Creel, but not developed in her work
10
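As a small illustration of ‘transparency for whom?’, the sketch below (hypothetical names, not Creel’s own proposal) records which of the three kinds of transparency a given actor actually has access to; the same system can then be a glass box for one actor and an opaque box for another.

```python
from dataclasses import dataclass

@dataclass
class TransparencyProfile:
    """Which of Creel's three kinds of transparency a given actor has access to."""
    actor: str                # e.g. "developer", "auditor", "patient"
    functional: bool = False  # knowledge of the algorithmic functioning of the whole
    structural: bool = False  # knowledge of how the algorithm was realized in code
    run: bool = False         # knowledge of a particular run (hardware, input data)

    def summary(self) -> str:
        kinds = [name for name, has_it in [("functional", self.functional),
                                           ("structural", self.structural),
                                           ("run", self.run)] if has_it]
        return f"{self.actor}: {', '.join(kinds) or 'no'} transparency"

# The same AI system, seen by different actors.
for profile in [TransparencyProfile("developer", functional=True, structural=True, run=True),
                TransparencyProfile("clinician", functional=True),
                TransparencyProfile("patient")]:
    print(profile.summary())
```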
HOLISTIC MODEL VALIDATION
An epistemology for glass-box AI
11
PLAN: PROCESS + VALUES + ACTORS
• We build on CR and on Creel’s account of transparency:
• Look at the whole process (=design, implementation, use) before the outcome
• Consider which values enter at each stage and how
• Consider how different actors respond differently to epistemological and ethical queries (see the sketch below)
12
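As a toy rendering of this plan (stage names, checks, and actors are illustrative assumptions, not a fixed standard), each stage of the process can be paired with its epistemological checks, the values at stake, and the actors who can answer for them:

```python
# A minimal sketch of "holistic model validation" as a per-stage checklist.
process = {
    "design": {
        "epistemology": ["research hypothesis", "background knowledge and theory"],
        "values": ["which notions of fairness are operationalized?", "intended use"],
        "actors": ["designers", "domain experts"],
    },
    "implementation": {
        "epistemology": ["verification and validation", "robustness analysis"],
        "values": ["foreseeable asymmetric harm patterns?", "accessible documentation"],
        "actors": ["programmers", "engineers"],
    },
    "use": {
        "epistemology": ["run transparency: hardware and input data of a particular run"],
        "values": ["accountability", "remedy for affected people"],
        "actors": ["deployers", "auditors", "affected non-experts"],
    },
}

for stage, checks in process.items():
    print(f"{stage}: epistemic checks {checks['epistemology']}; values {checks['values']}")
```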
LESSONS FROM PHIL SCI ON MODEL VALIDATION
• ‘Model validation’ in a restricted CS sense = adequacy of the model with respect
to empirical data
• Model validation in a broader Phil Sci sense = how to trust the whole process?
• Formulation of research hypothesis, selection of background knowledge and theory,
interpretation of results, possibly use (e.g. in policy), …
• Algorithmic procedures are a case in point, not special with respect to other
modelling strategies in science & technology
13
WHERE IS ETHICS IN MODEL VALIDATION?
• At each and every point of the process we can (and should) raise considerations
about
• Epistemology > transparency, explainability, validation/verification, …
• Ethics > which values are operationalized? What is intended? What is foreseeable?
How?
• We agree with Kearns & Roth: values can be operationalised
• Unlike Kearns & Roth, we take this to be not a trade-off but a design choice proper (see the sketch below)
14
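A minimal sketch of what treating a value as a design choice can look like in code (the metric and the threshold are illustrative assumptions, not the Kearns & Roth formulation): a demographic-parity gap is computed and gated as part of model validation, before any outcome is shipped.

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-decision rates across groups (0 means parity)."""
    rates = {}
    for group in set(groups):
        decisions = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(decisions) / len(decisions)
    ordered = sorted(rates.values())
    return ordered[-1] - ordered[0]

def validate_model(predictions, groups, max_gap=0.05):
    """Design-stage check: validation fails if the operationalized fairness value is violated."""
    gap = demographic_parity_gap(predictions, groups)
    return {"demographic_parity_gap": gap, "passes": gap <= max_gap}

# Toy run: 1 = positive decision, letters = (hypothetical) protected groups.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(validate_model(predictions, groups))
```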
ETHICS AS CONTINUOUS ASSESSMENT
• Ethical considerations have to be raised
• Already at the design stage
• Throughout the whole process
• And in combination with epistemological / technical considerations
• Epistemology-cum-Ethics: the way forward for XAI
• We care about the role of designers, programmers, engineers and other actors too
• Holistic model validation and glass-box epistemology ensure the possibility of inspecting the
system at any time
15
SYNERGIES
• Ethics-cum-Epistemology complements existing approaches:
• Ethics auditing (post-hoc), see e.g. Mökander & Floridi
• Ethics training, see e.g. Bezuidenhout & Ratti
• [We definitely need a solid legal framework too]
16
TO RECAP THE ARGUMENT SO FAR
• To trust an outcome we need to look at the process
• We expand CR into ECR, ‘re-inject’ the 3 types of transparency, and enlarge it to ‘holistic model validation’
• With ‘holistic model validation’ we claim that values enter at each and every step of the
process
• This is how we connect epistemology and ethics
• Next question: who can assess the process and how?
17
XAI AND
ARGUMENTS FROM EXPERT OPINION
18
DIMENSIONS OF INQUIRY
19
[2×2 matrix: Epistemological Queries and Normative Queries, crossed with Expert and Non-expert actors]
DISCLAIMERS
• We are aware that expertise
• Is not binary, but has shades of gray
• Can overlap across experts, and across groups of experts
• Can be ascribed to non-human agents too
• For simplicity, we
• Confine the discussion to human experts
• Consider that expertise concerns the ‘technical features’ of an AI system, and that
• Actors either have or do not have such expertise
20
NOTATION: EPISTEMIC SYMMETRY
EPISTEMOLOGICAL QUERIES
• Expert A: Can I trust the output of algorithm G?
• Expert B: Yes. Look at technical features XYZ.
NORMATIVE QUERIES
• Expert A: Is the algorithm G fair?
• Expert B: Yes. Look at technical features XYZ.
21
Experts A, B have equal or comparable expertise
NOTATION: EPISTEMIC ASYMMETRY
EPISTEMOLOGICAL QUERIES
• Non-expert: Can I trust the output of algorithm G?
• Expert: Yes. You can trust my expertise in designing and implementing technical features XYZ.
NORMATIVE QUERIES
• Non-expert: Is algorithm G fair?
• Expert: Yes. You can trust that I comply with ethics requirements, as mandated by institution Y.
22
The non-expert cannot assess the technical details; they have to trust the Expert
Can we accept arguments like these? How?
LEARNING FROM ARGUMENTATION THEORY
ARGUMENT FROM EXPERT OPINION
p is true, because p is said by expert E
POSSIBLE CRITICAL QUESTIONS
• Is E really an expert about p? → Check which form of institutionalization guarantees trusting the source of expertise
• Did expert E really say p? → Check the contents p said by E
• Do other experts agree? → Compare p with other expert opinions
• Are there other interests at play? → Check other institutional guarantees
(See the sketch below.)
24
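One way to keep these critical questions attached to the argument itself is a small data structure (a sketch loosely inspired by Walton-style argumentation schemes; the names and fields are our own assumptions, not an existing library):

```python
from dataclasses import dataclass

# Each critical question is paired with the check that would answer it.
CRITICAL_QUESTIONS = [
    ("Is E really an expert about p?",
     "Check which form of institutionalization guarantees trusting the source of expertise"),
    ("Did expert E really say p?",
     "Check the contents p said by E"),
    ("Do other experts agree?",
     "Compare p with other expert opinions"),
    ("Are there other interests at play?",
     "Check other institutional guarantees"),
]

@dataclass
class ExpertOpinionArgument:
    """Argument from expert opinion: 'p is true, because p is said by expert E'."""
    claim: str   # p
    expert: str  # E

    def audit(self):
        """Return the critical questions any evaluator of this argument should raise."""
        return list(CRITICAL_QUESTIONS)

argument = ExpertOpinionArgument(claim="algorithm G is fair", expert="the design team")
for question, check in argument.audit():
    print(f"{question} -> {check}")
```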
SIMPLIFIED SCENARIO 1:
EPISTEMIC SYMMETRY OF EXPERTS
• Expert A: "How did you get to result X?" [a question about the epistemology of AI]
• Expert B: "Because the system is designed such-and-such" [Expert B gives technical details about the AI system]
• Expert A: "Is your AI system fair and transparent?" [a question about the ethics of AI]
• Expert B: "Yes, I operationalized concepts XYZ in such-and-such way" [Expert B gives technical details about how the AI system is ethical]
25-26
In case of epistemic symmetry between experts, both epistemological and ethical questions can be answered with the technical details of the AI
SIMPLIFIED SCENARIO 2A:
EPISTEMIC ASYMMETRY
• Non-expert: "I am diagnosed with disease X, why?" [a question about the epistemology of AI]
• Expert: "Because the AI said you are in reference class XYZ" [the Expert's technical answer is meaningless to the non-expert]
• Non-expert: "Is your AI fair and unbiased?" [as a non-expert, if you can't grasp the epistemology, you inquire about the axiology]
• Expert: "Yes, I operationalized XYZ in such-and-such way" [the Expert's technical answer is again meaningless to the non-expert]
27-28
In case of epistemic asymmetry between expert and non-expert, the non-expert can grasp neither the epistemological nor the ethical issues through the technical details of the AI
SIMPLIFIED SCENARIO 2B:
EPISTEMIC ASYMMETRY
• Non-expert: "I am diagnosed with disease X, why?" [a question about the epistemology of AI]
• Expert: "Because the system said you are in reference class XYZ" [the Expert's technical answer is meaningless to the non-expert]
• Non-expert: "Is your AI system fair and transparent?" [as a non-expert, if you can't grasp the epistemology, you inquire about the axiology]
• Expert: "Yes, our research and algorithms comply with standards and codes of conduct XYZ" [the Expert's answer appeals to axiology and institutionalization]
29-30
In case of epistemic asymmetry, both epistemological and ethical questions are answered by appealing to axiology and institutionalization: the non-expert trusts that the process complies with institutionalized standards
SIMPLIFIED SCENARIO 3:
EPISTEMIC SYMMETRY OF NON-EXPERTS
• Non-expert A: "My request for a loan was rejected, why?" [a question about the epistemology of AI]
• Non-expert B: "Because the AI said you don't comply with XYZ" [a non-expert cannot give details about the process, only the output]
• Non-expert A: "Is your AI system fair and unbiased?" [a question about the ethics of AI]
• Non-expert B: "Yes, our bank is part of the EU Federation of Ethical Banks" [the non-expert answers both epistemological and ethical questions by appeal to institutionalization]
31-32
In case of epistemic symmetry between non-experts, epistemological and ethical questions are answered by appealing to axiology and institutionalization: the non-expert trusts that the process complies with institutionalized standards
FROM SIMPLIFIED SCENARIOS TO REAL SCENARIOS
• How to make ‘ethics-cum-epistemology’ normative:
• Requests for ethical compliance have to be anticipated with clear and accessible code documentation
• High ethical standards are not a compromise on, e.g., efficiency, but a positive stance on, e.g., fairness and transparency
• Kearns & Roth: a trade-off
• Russo-Schliesser-Wagemans: value-promoting
• Easier said than done: many questions about the governance of ‘ethical XAI’ still need to be addressed, e.g.:
• Should we aim for ‘more institutionalization’ as safeguard?
• What could be a better use of ethics forms and guidelines?
33
TO SUM UP AND CONCLUDE
34
EPISTEMOLOGICAL AND NORMATIVE
• It is high time that epistemological and normative questions are considered
together, rather than separately
• To develop an ethics-cum-epistemology, we shift focus from the outcome to the
whole process
• At each stage of the whole process, normative and epistemic questions have to
be considered
• Ethics is continuous assessment, rather than post-hoc
35
ARGUMENTS FROM EXPERT OPINION AND AI
• With an ethics-cum-epistemology, and with the aid of argumentation theory, we
account for situations of epistemic symmetry and asymmetry
• In epistemic symmetry, both epistemological and normative questions can be answered at the technical level
• In epistemic asymmetry, axiology and institutionalization help address both
epistemological and normative questions
36
CONNECTING THE ETHICS
AND EPISTEMOLOGY OF AI
FEDERICA RUSSO, ERIC SCHLIESSER, JEAN WAGEMANS
UNIVERSITY OF AMSTERDAM
@FEDERICARUSSO | @NESCIO13 | @JEANWAGEMANS
Thanks for your attention
