Connecting the epistemology and ethics of AI
1. CONNECTING THE ETHICS
AND EPISTEMOLOGY OF AI
FEDERICA RUSSO, ERIC SCHLIESSER, JEAN WAGEMANS
UNIVERSITY OF AMSTERDAM
@FEDERICARUSSO | @NESCIO13 | @JEANWAGEMANS
2. OUTLINE
● From Ethics aut Epistemology to Ethics cum Epistemology
○ Disconnected projects
○ Ethics as a post-hoc assessment
○ Shifting focus from output to process
○ Ethics as continuous assessment, from design to use
● What can XAI learn from argumentation theory?
○ A crash course on arguments from expert opinion
○ 4 simplified scenarios
○ A normative stance for real scenarios
4. DISCONNECTED PROJECTS
• Disconnected projects:
• [Ethics] Questions of how to make AI ethically compliant, ensuring that algorithms are as fair as
possible and as unbiased as possible.
• [Epistemology] Questions of transparency / opacity of AI, i.e., AI as a glass or opaque box.
• Our approach:
• Not whether there is an intrinsic value in XAI, but how questions of epistemology bear on ethics,
and vice-versa
• Broader than value-sensitive design, we care about the whole process from design to use, and
considering multiple actors
• ‘Ethics’ as shorthand for ‘axiology’: values taken to include social aims, etc.
5. ETHICS AS POST HOC ASSESSMENT
• AI raises important ethical concerns, therefore we need to produce suitable
mechanisms:
• To audit ethics compliance
• To verify responsibility and accountability
• A number of excellent protocols exist, and they are valuable
• Yet, some scholars criticize the Ethics of AI for being mere ‘window-dressing’
• Rather, we aim to contribute to the ‘scaffolding’
6. ‘STAND ALONE’ EPISTEMOLOGY
• A vast, rich, fast-growing debate on epistemology of AI
• When / how is an AI reliable or trustworthy? Hence, under which conditions can we trust
the outcome of an AI?
• Lots hinges on definition of transparency | opacity | accuracy | explainability |…
• With wide agreement that most AI systems are opaque
• So, how can we trust outcomes of opaque AI?
• But the whole debate is orthogonal to ethical concerns
8. COMPUTATIONAL RELIABILISM:
WHAT SHOULD WE TRUST?
(CR) If S’s believing p at t results from m, then S’s belief in p at t is justified, where S is a cognitive agent,
p is any truth-valued proposition related to the results of a computer simulation, t is any given time, and
m is a reliable computer simulation. (Durán; Durán & Formanek)
• Not ‘more transparency’, but focus on process that makes output reliable
• CR indicators: verification and validation methods; robustness analysis; a history of (un)successful
implementations; expert knowledge
• We build on CR to:
• Include values in CR more explicitly
• Re-enter considerations about transparency
• Include more actors explicitly
9. ETHICAL COMPUTATIONAL RELIABILISM
(ECR) If S’s believing p at t results from m, then S’s belief in p at t is justified, where S is a cognitive
agent, p is any truth-valued proposition related to the results of an AI, t is any given time, and m is a
reliable algorithmic mediation that does not (intentionally) generate foreseeable asymmetric harm
patterns to vulnerable populations.
• We need to make purpose explicit
• One purpose is to not intentionally harm vulnerable populations
• Ex-ante ethical assessment is key
• It may raise costs at the beginning, but reduce litigation costs later
• ECR is no magic bullet
• Prevention, accountability, and remedy of some unforeseeable asymmetric harm patterns may still be outside ECR
• We need complementary design, assessment, regulatory mechanisms in place, and at different levels
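Set out side by side, the two schemata differ only in the condition placed on the mediation m; the symbolic notation below is ours, a compact paraphrase rather than the authors' own formalism:

```latex
% CR: reliability of the computer simulation m suffices for justification
(\mathrm{CR})\quad \mathrm{Bel}_m(S,p,t) \,\wedge\, \mathrm{Reliable}(m)
  \;\Rightarrow\; \mathrm{Justified}(S,p,t)

% ECR: m must additionally not (intentionally) generate foreseeable
% asymmetric harm patterns to vulnerable populations
(\mathrm{ECR})\quad \mathrm{Bel}_m(S,p,t) \,\wedge\, \mathrm{Reliable}(m)
  \,\wedge\, \neg\mathrm{Harm}(m) \;\Rightarrow\; \mathrm{Justified}(S,p,t)
```

The extra conjunct \(\neg\mathrm{Harm}(m)\) is what makes the purpose of the mediation explicit: reliability alone no longer suffices for justification.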
10. RE-INTRODUCING TRANSPARENCY
• Creel’s 3 types of transparency:
• Functional: “knowledge of the algorithmic functioning of the whole”
• Structural: “knowledge of how the algorithm was realized in code”
• Run: “knowledge of the program as it was actually run in a particular instance,
including the hardware and input data used”
• Creel’s 3 types of transparency help us:
• Focus on epistemology
• Introduce actors explicitly: transparency for whom?
• This is hinted at in Creel, but not developed in her work
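As a minimal sketch of what documenting Creel's 'run' transparency could look like in practice, the snippet below assembles an auditable record of one particular run. The function name, the fields captured, and the version string are our illustrative choices, not part of Creel's account:

```python
# Illustrative sketch: recording a single algorithmic run
# (Creel's 'run' transparency: the program as actually run in a
# particular instance, including the hardware and input data used).

import hashlib
import json
import platform


def run_record(model_version, input_data):
    """Assemble an auditable record of one run of the system."""
    # Serialize inputs deterministically so the digest is reproducible.
    serialized = json.dumps(input_data, sort_keys=True)
    return {
        "model_version": model_version,  # which code ran (structural link)
        "input_digest": hashlib.sha256(serialized.encode()).hexdigest(),
        "platform": platform.platform(),  # which machine/OS it ran on
        "python": platform.python_version(),
    }


record = run_record("v1.2.0", {"age": 41, "income": 52000})
print(record["model_version"], record["input_digest"][:12])
```

The design point is that run transparency is something a process can build in from the start, rather than reconstruct post hoc.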
12. PLAN: PROCESS + VALUES + ACTORS
• We build on CR and on Creel’s account of transparency:
• Look at the whole process (=design, implementation, use) before the outcome
• Consider which values enter at each stage and how
• Consider how different actors respond differently to epistemological and ethical queries
13. LESSONS FROM PHIL SCI ON MODEL VALIDATION
• ‘Model validation’ in a restricted CS sense = adequacy of the model with respect
to empirical data
• Model validation in a broader Phil Sci sense = how to trust the whole process?
• Formulation of research hypothesis, selection of background knowledge and theory,
interpretation of results, possibly use (e.g. in policy), …
• Algorithmic procedures are a case in point, not special with respect to other
modelling strategies in science & technology
14. WHERE IS ETHICS IN MODEL VALIDATION?
• At each and every point of the process we can (should) make considerations
about
• Epistemology > transparency, explainability, validation/verification, …
• Ethics > which values are operationalized? What is intended? What is foreseeable?
How?
• We agree with Kearns & Roth: values can be operationalised
• Unlike Kearns & Roth, we hold it is not a trade-off but a design choice proper
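As a toy illustration of what operationalizing a value can look like as an explicit design choice (our example, not Kearns & Roth's), one could measure a classifier's decisions against a demographic-parity criterion; the metric, function name, and data are ours:

```python
# Toy sketch: operationalizing 'fairness' as a measurable design constraint.
# The metric (demographic parity gap) is one illustrative choice among many,
# not the only way to encode the value.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes (e.g., loan granted = 1)
    groups: list of group labels ('A' or 'B'), aligned with decisions
    """
    rate = {}
    for g in ('A', 'B'):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate['A'] - rate['B'])

# The design choice made explicit: the system is acceptable only if the
# gap stays below a tolerance fixed *before* deployment, not traded off
# after the fact.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
print(demographic_parity_gap(decisions, groups))  # A = 3/4, B = 1/4, gap = 0.5
```

The point of the sketch is that once a value is operationalized, it can be checked at every stage of the process, in line with the continuous-assessment view above.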
15. ETHICS AS CONTINUOUS ASSESSMENT
• Ethical considerations have to be raised
• Already at the design stage
• Throughout the whole process
• And in combination with epistemological / technical considerations
• Epistemology-cum-Ethics: the way forward for XAI
• We care about the role of designers, programmers, engineers and other actors too
• Holistic model validation and glass-box epistemology ensure the possibility of inspecting the
system at any time
16. SYNERGIES
• Ethics-cum-Epistemology complements existing approaches:
• Ethics auditing (post-hoc), see e.g. Mökander & Floridi
• Ethics training, see e.g. Bezuidenhout & Ratti
• [We definitely need a solid legal framework too]
17. TO RECAP THE ARGUMENT SO FAR
• To trust an outcome we need to look at the process
• We expand CR into ECR, ‘re-inject’ 3 types of transparency, enlarge it to ‘holistic model
validation’
• With ‘holistic model validation’ we claim that values enter at each and every step of the
process
• This is how we connect epistemology and ethics
• Next question: who can assess the process and how?
20. DISCLAIMERS
• We are aware that expertise
• Is not binary, but has shades of gray
• Can overlap across experts, and across groups of experts
• Can be ascribed to non-human agents too
• For simplicity, we
• Confine the discussion to human experts
• Consider that expertise concerns ‘technical features’ of an AI system, and that
• Actors have or do not have such expertise
21. NOTATION: EPISTEMIC SYMMETRY
EPISTEMOLOGICAL QUERIES
• Expert A: Can I trust the output of algorithm G?
• Expert B: Yes. Look at technical features XYZ.
NORMATIVE QUERIES
• Expert A: Is algorithm G fair?
• Expert B: Yes. Look at technical features XYZ.
Experts A and B have equal or comparable expertise
22. NOTATION: EPISTEMIC ASYMMETRY
EPISTEMOLOGICAL QUERIES
• Non-expert: Can I trust the output of algorithm G?
• Expert: Yes. You can trust my expertise in designing and implementing technical features XYZ.
NORMATIVE QUERIES
• Non-expert: Is algorithm G fair?
• Expert: Yes. You can trust that I comply with ethics requirements, as mandated by institution Y.
The Non-expert cannot assess technical details and has to trust the Expert
24. LEARNING FROM ARGUMENTATION THEORY
ARGUMENT FROM EXPERT OPINION
p is true, because p is said by expert E
POSSIBLE CRITICAL QUESTIONS
• Is E really an expert about p? > Check which form of institutionalization guarantees trusting the source of expertise
• Did expert E really say p? > Check the contents p said by E
• Do other experts agree? > Confront p with other expert opinions
• Are there other interests at play? > Check other institutional guarantees
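The scheme and its critical questions can be represented as a simple data structure, e.g. for tooling that tracks which questions are still open; the class name, fields, and acceptability rule below are our illustrative choices, not a standard argumentation-theory API:

```python
# Sketch: the argument from expert opinion as a checkable structure.
# The argument is presumptively acceptable only while no critical
# question has a negative answer on record (our simplified rule).

from dataclasses import dataclass, field


@dataclass
class ExpertOpinionArgument:
    claim: str   # the proposition p
    expert: str  # the expert E who asserts p
    # Critical questions mapped to answers: True, False, or None (open).
    critical_questions: dict = field(default_factory=lambda: {
        "Is E really an expert about p?": None,
        "Did expert E really say p?": None,
        "Do other experts agree?": None,
        "Are there no other interests at play?": None,
    })

    def presumptively_acceptable(self):
        # Defeated as soon as any critical question is answered negatively;
        # open (None) questions do not yet defeat the argument.
        return all(a is not False for a in self.critical_questions.values())


arg = ExpertOpinionArgument(claim="algorithm G is fair", expert="E")
print(arg.presumptively_acceptable())  # True: no negative answers yet
arg.critical_questions["Do other experts agree?"] = False
print(arg.presumptively_acceptable())  # False: the argument is defeated
```

This mirrors how the critical questions function dialectically: the burden shifts back to the proponent once any of them receives a negative answer.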
25. SIMPLIFIED SCENARIO 1:
EPISTEMIC SYMMETRY OF EXPERTS
• Expert A: “How did you get to result X?” (a question about the epistemology of AI)
• Expert B: “Because the system is designed such-and-such” (Expert B gives technical details about the AI system)
• Expert A: “Is your AI system fair and transparent?” (a question about the ethics of AI)
• Expert B: “Yes, I operationalized concepts XYZ in such-and-such way” (Expert B gives technical details about how the AI system is ethical)
In case of epistemic symmetry between experts, both epistemological
and ethical questions can be answered with the technical details of the AI
27. SIMPLIFIED SCENARIO 2A:
EPISTEMIC ASYMMETRY
• Non-expert: “I am diagnosed with disease X, why?” (a question about the epistemology of AI)
• Expert: “Because the AI said you are in reference class XYZ” (the Expert’s technical answer is meaningless to the Non-expert)
• Non-expert: “Is your AI fair and unbiased?” (as a non-expert, if you can’t grasp the epistemology, you inquire about the axiology)
• Expert: “Yes, I operationalized XYZ in such-and-such way” (again, the Expert’s technical answer is meaningless to the Non-expert)
In case of epistemic asymmetry, the Non-expert can grasp neither epistemological
nor ethical issues through the technical details of the AI
29. SIMPLIFIED SCENARIO 2B:
EPISTEMIC ASYMMETRY
• Non-expert: “I am diagnosed with disease X, why?” (a question about the epistemology of AI)
• Expert: “Because the system said you are in reference class XYZ” (the Expert’s technical answer is meaningless to the Non-expert)
• Non-expert: “Is your AI system fair and transparent?” (as a non-expert, if you can’t grasp the epistemology, you inquire about the axiology)
• Expert: “Yes, our research and algorithms comply with standards and codes of conduct XYZ” (the Expert’s answer appeals to axiology + institutionalization)
In case of epistemic asymmetry, both epistemological and ethical questions
are answered by appealing to axiology and institutionalization: the Non-expert
trusts that the process complies with institutionalized standards
31. SIMPLIFIED SCENARIO 3:
EPISTEMIC SYMMETRY OF NON-EXPERTS
• Non-expert A: “My request for a loan was rejected, why?” (a question about the epistemology of AI)
• Non-expert B: “Because the AI said you don’t comply with XYZ” (a non-expert cannot give details about the process, only the output)
• Non-expert A: “Is your AI system fair and unbiased?” (a question about the ethics of AI)
• Non-expert B: “Yes, our bank is part of the EU Federation of Ethical Banks” (the non-expert answers epistemological and ethical questions with institutionalization)
In case of epistemic symmetry between non-experts, epistemological
and ethical questions are answered by appealing to axiology and
institutionalization: the non-experts trust that the process complies
with institutionalized standards
33. FROM SIMPLIFIED SCENARIOS TO REAL SCENARIOS
• How to make ‘ethics-cum-epistemology’ normative
• Requests of ethical compliance have to be anticipated with clear and accessible coding
documentation
• High standards on ethics are not a compromise on e.g. efficiency, but a positive
stance about e.g. fairness and transparency
• Kearns & Roth: a trade-off
• Russo-Schliesser-Wagemans: value-promoting
• Easier said than done, many questions about the governance of ‘ethical XAI’ still need
to be addressed, e.g.:
• Should we aim for ‘more institutionalization’ as safeguard?
• What could be a better use of ethics forms and guidelines?
35. EPISTEMOLOGICAL AND NORMATIVE
• It is high time that epistemological and normative questions are considered
together, rather than separately
• To develop an ethics-cum-epistemology, we shift focus from the outcome to the
whole process
• At each stage of the whole process, normative and epistemic questions have to
be considered
• Ethics is continuous assessment, rather than post-hoc
36. ARGUMENTS FROM EXPERT OPINION AND AI
• With an ethics-cum-epistemology, and with the aid of argumentation theory, we
account for situations of epistemic symmetry and asymmetry
• In epistemic symmetry, both epistemological and normative questions can be
answered at technical level
• In epistemic asymmetry, axiology and institutionalization help address both
epistemological and normative questions
Thanks for your attention