Data, algorithms and discrimination
Fairness in AI
daniele regoli
Big Data Lab @ Intesa Sanpaolo
daniele.regoli@intesasanpaolo.com
2022.01.27 Deep Learning Italia
milano torino
Intesa Sanpaolo
Big Data Lab
design. develop. deploy.
2016 2018 2021
The Team
BIG DATA LAB: 49 data scientists, Torino and Milano
backgrounds: 30% Mathematics, 25% Physics, 25% Engineering, 10% Finance, 10% Computer Science
our numbers: from 5 to 49. We've been growing quite a lot.
our mission: Build, Microservices, Monitor
COLLABORATIONS: 50+ Projects Completed
Why we talk about fairness in AI
The zoo of fairness metrics
A roadmap to fairness
Mitigation strategies
Error rate parities & the COMPAS debate
Why do we talk about fairness in AI?
credits: Moritz Hardt
risks for
people
this can have a huge impact on people’s lives
e.g. Recruiting / Loans approval
ML could amplify and
perpetuate biases already
present in data, at large
scale
data as a social
mirror
ML could disregard
minority groups,
effectively producing
bias even if absent in
the data
sample size
imbalances
risks for
companies
AI
Regulation
The zoo of fairness metrics
there are many different definitions of
fairness, generally incompatible with one
another
FAIRNESS
observational
metrics
causality-
based metrics
group
fairness
individual
fairness
Demographic
fairness
Error rate
parity
assessing
fairness
ground truth,
e.g. repayment
of loans
model decision,
e.g. grant/don’t
grant loan
sensitive
feature(s), e.g.
citizenship,
gender
non-sensitive
feature(s), e.g.
income
Machine Learning
model
same percentage of loans granted to men and women
same error rates for men and women
DEMOGRAPHIC
PARITY
CONDITIONAL
DEMOGRAPHIC PARITY
given some characteristics, same percentage of loans
granted to men and women
EQUALITY OF ODDS
(some) group
fairness
metrics
independence
separation
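As a rough illustration of independence and separation (a hypothetical sketch: function names and toy data are mine, not from the talk), both group metrics can be checked directly from model decisions:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-decision rates between the two groups (independence)."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_diff(y_true, y_pred, group):
    """Max gap in TPR and FPR between the two groups (separation)."""
    gaps = []
    for y in (0, 1):  # condition on the ground truth
        mask = y_true == y
        r0 = y_pred[mask & (group == 0)].mean()
        r1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

# toy data: loan decisions for two groups
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_diff(y_pred, group))          # 0.25
print(equalized_odds_diff(y_true, y_pred, group))      # 0.5
```

A value of 0 for the first quantity corresponds to demographic parity; 0 for the second to equality of odds.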
FAIRNESS THROUGH
AWARENESS
similar individuals are given similar
decisions
FAIRNESS THROUGH
UNAWARENESS
model’s outcomes are functions of non-
sensitive features only
(some)
individual
fairness
metrics
"Fairness through awareness", Dwork, Cynthia, et al., Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 2012
the devil
is in the
details
group fairness vs individual fairness
[Plot: loans granted/rejected by financial status, for domestic vs foreign applicants]
fairness through unawareness:
don't explicitly use the sensitive variable
the devil
is in the
details
domestic foreign
demographic parity:
same acceptance rate for different groups
widely cited 4/5 rule
of the Uniform
Guidelines on Employee
Selection Procedures
group fairness vs individual fairness
[Plot: loans granted/rejected by financial status, for domestic vs foreign applicants]
Demographic parity makes
sense when you want to
enforce equality between
groups whose
distributions you deem
should be equal if there
were no bias
Fairness Through
Unawareness:
problems with correlations; which variables are we allowed to use?
[Diagram: Demographic Parity, Conditional Demographic Parity, Fairness Through Unawareness, and Suppression arranged along a group-to-individual axis, by the information contained in the decision (metrics vs techniques)]
«The zoo of Fairness metrics
in Machine Learning»,
Castelnovo, Crupi, Greco,
Regoli, preprint 2021
fairness
metrics
landscape
Error rate parities & the COMPAS debate
the devil
is in the
details
The COMPAS debate: error rate parities are not all
the same
White Black
precision = 75%
score
re-offend
does not
re-offend
precision = 75%
Predictive
Parity ~
the devil
is in the
details
The COMPAS debate: error rate parities are not all
the same
White Black
score
recall = 60% recall = 86%
re-offend
does not
re-offend
Equality of
opportunity ~
Impossibility
Theorem
Fairness and Machine
Learning, Solon Barocas
and Moritz Hardt and
Arvind Narayanan (2019)
https://fairmlbook.org/
~recall parity ~precision parity
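A toy numpy check (confusion counts chosen to mimic the slide's numbers; the helper and data are mine, not from the talk) shows how predictive parity can hold while recall parity fails:

```python
import numpy as np

def precision_recall(y_true, y_pred):
    """Precision and recall from binary labels and predictions."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fp), tp / (tp + fn)

# hypothetical confusion counts mimicking the slide:
# group A: TP=3, FP=1, FN=2  -> precision 0.75, recall 0.60
# group B: TP=6, FP=2, FN=1  -> precision 0.75, recall ~0.86
yA_true = np.array([1, 1, 1, 0, 1, 1]); yA_pred = np.array([1, 1, 1, 1, 0, 0])
yB_true = np.array([1]*6 + [0, 0, 1]);  yB_pred = np.array([1]*6 + [1, 1, 0])

for name, yt, yp in [("A", yA_true, yA_pred), ("B", yB_true, yB_pred)]:
    p, r = precision_recall(yt, yp)
    print(name, round(p, 2), round(r, 2))  # equal precision, unequal recall
```

Equal precision across groups gives predictive parity, yet the recall gap violates equality of opportunity, which is exactly the tension at the heart of the COMPAS debate.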
Mitigation strategies
pre-processing: dataset → remove bias from dataset → train the model → validation
in-processing: dataset → train a fairness-aware model → validation
post-processing: dataset → train the model → validation → adjust thresholds
* Adversarial Debiasing
* Reduction method
* suppression
* resampling
* massaging
Suppression
Remove sensitive variable(s) and features highly correlated with
them
Massaging
Re-label enough "minority" observations to obtain a new, "massaged" training
dataset that satisfies demographic parity from the start.
Which observations to re-label is chosen by training an
auxiliary model on the original target
Sampling
Resample observations in order to reach Demographic Parity
pre-
processing
Fair Representation
Learn a representation of data such that sensitive information is
removed while keeping as much information as possible from X
"Learning fair
representations", Zemel,
Rich, et al. International
conference on machine
learning. PMLR, 2013
“Data preprocessing
techniques for
classification without
discrimination”, F.
Kamiran and T. Calders,
Knowledge and Information
Systems, 2012
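One minimal way to implement the sampling idea is reweighing in the spirit of Kamiran and Calders, using weights rather than literal resampling (a sketch under that assumption; the function name and toy data are mine):

```python
import numpy as np

def reweighing_weights(y, group):
    """Kamiran-Calders-style weights: w(g, t) = P(g) * P(t) / P(g, t).
    Under these weights, group membership and label become
    statistically independent, i.e. demographic parity holds."""
    n = len(y)
    w = np.empty(n, dtype=float)
    for g in np.unique(group):
        for t in np.unique(y):
            mask = (group == g) & (y == t)
            p_joint = mask.sum() / n
            w[mask] = (np.mean(group == g) * np.mean(y == t)) / p_joint
    return w

# toy data: group 0 has mostly positive labels, group 1 mostly negative
y     = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
w = reweighing_weights(y, group)
for g in (0, 1):
    m = group == g
    print(g, (w[m] * y[m]).sum() / w[m].sum())  # weighted positive rates match
```

Resampling each observation proportionally to its weight yields the balanced dataset the slide describes.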
Idea: a model tries to
maximize performance while an
Adversary tries to reconstruct
the protected variable Z from
the model’s outputs
Goal: train using the following gradient steps.
Predictor (W step): ∇_W L_P − proj_{∇_W L_A}(∇_W L_P) − α ∇_W L_A
The first term estimates Y; the last pushes against the Adversary; the projection term removes the component of the update that would help the Adversary.
Adversary (U step): ∇_U L_A, which reconstructs Z from the model's outputs so that its effect can be removed.
Diagram illustrating the gradients
Without the projection term, in the pictured
scenario, the predictor would move in the
direction labelled g+h in the diagram, which
actually helps the adversary. With the projection
term, the predictor will never move in a direction
that helps the adversary.
in-
processing
“Mitigating unwanted biases with
adversarial learning”, B. H.
Zhang, B. Lemoine, and M. Mitchell,
Proceedings of the 2018 AAAI/ACM
Conference on AI, Ethics, and Society,
2018
here Z is the protected
variable
U step
W step
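To make the projection term concrete, here is a toy numpy sketch of the predictor's update direction (the function name, α, and the toy gradients are mine; this mirrors the Zhang et al. update, not their exact code):

```python
import numpy as np

def predictor_step(g_pred, g_adv, alpha=1.0):
    """Update direction for the predictor weights:
    g_pred - proj_{g_adv}(g_pred) - alpha * g_adv.
    Subtracting the projection keeps the predictor from moving
    in a direction that helps the adversary reconstruct Z."""
    unit = g_adv / (np.linalg.norm(g_adv) + 1e-12)
    proj = np.dot(g_pred, unit) * unit
    return g_pred - proj - alpha * g_adv

g_pred = np.array([1.0, 2.0])   # toy gradient of the predictor loss
g_adv  = np.array([0.0, 1.0])   # toy gradient of the adversary loss
print(predictor_step(g_pred, g_adv))  # component along g_adv removed, then pushed against
```

Here the second coordinate (the direction that helps the adversary) is first zeroed by the projection and then pushed in the opposite direction by the −α term.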
General idea
Tweak the threshold for different groups with respect to the sensitive variable
Equality of Odds: need to look at the ROC curves
Demographic Parity: straightforward
Plots from Barocas, Hardt, Narayanan https://fairmlbook.org
“Equality of opportunity
in supervised learning”,
Hardt, Price, Srebro, NeurIPS 2016
post-
processing
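For the "straightforward" demographic-parity case, one simple post-processing scheme (an illustrative sketch, not the paper's algorithm: the helper and target rate are mine) picks each group's threshold as its own score quantile, so acceptance rates match by construction:

```python
import numpy as np

def dp_thresholds(scores, group, accept_rate=0.5):
    """Per-group score thresholds enforcing the same acceptance rate
    (demographic parity): each group uses its own score quantile."""
    return {g: np.quantile(scores[group == g], 1 - accept_rate)
            for g in np.unique(group)}

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.6, 0.1, 500),   # group 0 scores higher
                         rng.normal(0.4, 0.1, 500)])  # group 1 scores lower
group = np.repeat([0, 1], 500)
th = dp_thresholds(scores, group)
for g in (0, 1):
    m = group == g
    print(g, round(th[g], 3), (scores[m] >= th[g]).mean())  # ~0.5 in both groups
```

For equality of odds this single quantile trick is not enough: that is where the per-group ROC curves come in.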
A roadmap to fairness
Project
description
and objectives
A joint research collaboration on Trustworthy AI in the
financial domain.
The goal is to survey the available metrics and
techniques and to come up with a roadmap to follow in
order to pursue fairness.
To reach this goal, we carried out explorations on a
real-world financial use case of credit lending.
We collect assessment, mitigation, and visualization tools
into a fairness toolbox called BeFair.
«BeFair: addressing
Fairness in the Banking
sector» Castelnovo,
Crupi, Greco, Del Gamba,
Naseer, Regoli, San
Miguel Gonzalez, (2020
IEEE Big Data
Conference)
REGULATORY
ASPECTS
What are the principles
to be followed
to identify the
appropriate concept of
fairness to be pursued?
DATASET
ASSESSMENT
Target assessment:
is there human intervention?
Features assessment:
what information is compatible
with the pursued fairness concept?
Correlations with protected classes
Choose the metric(s) that best
comply with previous findings
1.
2.
3.
BIAS
MITIGATION
Apply a set of mitigation
strategies in order to
target the chosen metric
4.
COMPARISON
AND CHOICE
Compare and choose the
best strategy
5.
CHOICE OF THE
FAIRNESS METRICS
EXPERT KNOWLEDGE
CONTEXT BASED FAIRNESS
DEFINITION
BUSINESS
NEEDS AND
LOGICS
LEGAL AND DOMAIN
KNOWLEDGE
Credit Lending
use case
~200,000 loan applications
~50 predictors, including financial variables and
personal information.
The target is the final decision of a human officer.
Dataset assessment
Throughout the analysis, we focus on
CITIZENSHIP = {0, 1}
as the sensitive attribute with respect to which fairness
is assessed.
Bias, measured in terms of Demographic Parity, is
negligible in the original target, but amplified by
the application of an ML model.
2
3
% loans allowed to citizens: 75.5%
% loans allowed to non-citizens: 75.1%
% loans allowed to citizens: 77.5%
% loans allowed to non-citizens: 51.7%
Mitigated Model – Group Level Mitigated Model – Individual Level
Bias mitigation
BeFair: developed methods and fairness goals (Demographic Parity, Error Rate Parity, Individual Fairness)
pre-processing: FTU, Suppression, Massaging, Sampling
in-processing: CFF, AdvDP, AdvEO, AdvCDP, ReductionsGS, ReductionsEG
post-processing: ThreshDP, ThreshEO, ThreshEopp, ThreshCDP
4
Counterfactual Fairness
Build causal graph with causal
discovery algorithms and
validate with domain experts.
Employ the causal graph to
train a counterfactually fair
model (Kusner et al. 2017): no
causal flow from sensitive
attribute to final decision.
Nodes are variables, while
directed edges express the
causal relationships among
them.
4
BeFair:
bias mitigation
results
5
Proposed methods to identify the best performance-fairness tradeoff:

Trade-off fairness-performance:
(1 + β²) · (1 − φ) · π / (β² · (1 − φ) + π)

Constrained performance:
max π subject to φ < Φ

π and φ are the preferred performance and
fairness metrics, respectively, and β is the
weight associated with the performance metric.
Comparison and choice
5
Conclusions
Bias and discrimination are concrete risks for AI
applications at scale.
…a crucial point is that Fairness in Machine
Learning cannot be left to Data Scientists only.
More research is needed on the ethical and legal
side to clarify the needs of specific domains.
More research is needed on the technical side,
e.g. to understand the relationship among
different fairness metrics, in particular group
vs individual, or to find mitigation strategies
enforcing different metrics.
Fairness concepts are manifold, and care should
be taken in any specific situation.
Castelnovo, Crupi, Greco, Regoli, The zoo of Fairness metrics in Machine
Learning, preprint (2021)
Barocas, Selbst, Big data's disparate impact, Calif. L. Rev. (2016)
Mehrabi et al. A survey on bias and fairness in machine learning, ACM
Computing Surveys (2021)
Zhang, Hu, Mitchell, Mitigating unwanted biases with adversarial learning,
Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society
(2018)
Kamiran, Calders, Data preprocessing techniques for classification without
discrimination, Knowledge and Information Systems (2012)
Hardt, Price, Srebro, Equality of opportunity in supervised learning,
Advances in neural information processing systems (2016)
SOME
REFERENCES
Barocas, Hardt, Narayanan, Fairness and Machine Learning (2019)
surveys
bias
mitigation
Zemel et al., Learning fair representations, International Conference on
Machine Learning, PMLR (2013)
thank you daniele regoli
Big Data Lab @ Intesa Sanpaolo
daniele.regoli@intesasanpaolo.com

VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
 
Spark3's new memory model/management
Spark3's new memory model/managementSpark3's new memory model/management
Spark3's new memory model/management
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
 
Industrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfIndustrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdf
 
B2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxB2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docx
 
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
 
Data Science Project: Advancements in Fetal Health Classification
Data Science Project: Advancements in Fetal Health ClassificationData Science Project: Advancements in Fetal Health Classification
Data Science Project: Advancements in Fetal Health Classification
 
Russian Call Girls Dwarka Sector 15 💓 Delhi 9999965857 @Sabina Modi VVIP MODE...
Russian Call Girls Dwarka Sector 15 💓 Delhi 9999965857 @Sabina Modi VVIP MODE...Russian Call Girls Dwarka Sector 15 💓 Delhi 9999965857 @Sabina Modi VVIP MODE...
Russian Call Girls Dwarka Sector 15 💓 Delhi 9999965857 @Sabina Modi VVIP MODE...
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
 
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
 

Regoli fairness deep_learningitalia_20220127

  • 1. Data, algorithms and discrimination Fairness in AI daniele regoli Big Data Lab @ Intesa Sanpaolo daniele.regoli@intesasanpaolo.com 2022.01.27 Deep Learning Italia
  • 2. milano torino Intesa Sanpaolo Big Data Lab design. develop. deploy.
  • 3. 2016 2018 2021 The Team BIG DATA LAB DATA SCIENTISTS: 49 TORINO MILANO 30% MATHEMATICS 25% PHYSICS 25% ENGINEERING 10% FINANCE 10% COMPUTER SCIENCE our numbers: from 5 to 49, we’ve been growing quite a lot
  • 5. Why we talk about fairness in AI The zoo of fairness metrics A roadmap to fairness Mitigation strategies Error rate parities & the COMPAS debate
  • 6. Why do we talk about fairness in AI?
  • 8. risks for people this can have a huge impact on people’s lives e.g. Recruiting / Loan approval ML could amplify and perpetuate biases already present in data, at large scale data as a social mirror ML could disregard minority groups, effectively producing bias even if absent in the data sample size imbalances
  • 11. The zoo of fairness metrics
  • 12. there are a lot of different definitions of fairness, in general not compatible with one another FAIRNESS observational metrics causality-based metrics group fairness individual fairness Demographic fairness Error rate parity assessing fairness
  • 13. ground truth, e.g. repayment of loans model decision, e.g. grant/don’t grant loan sensitive feature(s), e.g. citizenship, gender non-sensitive feature(s), e.g. income Machine Learning model
  • 14. same percentage of loans granted to men and women same error rates for men and women DEMOGRAPHIC PARITY CONDITIONAL DEMOGRAPHIC PARITY given some characteristics, same percentage of loans granted to men and women EQUALITY OF ODDS (some) group fairness metrics independence separation
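These group criteria can be checked directly on model outputs. A minimal sketch (illustrative function and variable names, not from the talk), assuming binary decisions and a binary sensitive attribute:

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-decision rates between the two groups
    (independence): 0 means demographic parity holds exactly."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_gap(y_true, y_pred, sensitive):
    """Max absolute gap in group-wise positive rates conditioned on the
    ground truth (separation), i.e. TPR and FPR differences."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    gaps = []
    for y in (0, 1):  # condition on the ground truth label
        mask = y_true == y
        rate_a = y_pred[mask & (sensitive == 0)].mean()
        rate_b = y_pred[mask & (sensitive == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)
```

Conditional demographic parity would additionally restrict both rates to observations sharing some legitimate characteristics (e.g. the same income band).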
  • 15. FAIRNESS THROUGH AWARENESS similar individuals are given similar decisions FAIRNESS THROUGH UNAWARENESS model’s outcomes are functions of non-sensitive features only (some) individual fairness metrics “Fairness through awareness”, Dwork, Cynthia, et al. Proceedings of the 3rd innovations in theoretical computer science conference. 2012
  • 16. the devil is in the details group fairness vs individual fairness domestic foreign financial status loan granted loan rejected fairness through unawareness: don’t explicitly use the sensitive variable
  • 17. the devil is in the details domestic foreign demographic parity: same acceptance rate for different groups widely cited 4/5 rule of the Uniform Guidelines on Employee Selection Procedures group fairness vs individual fairness financial status loan granted loan rejected
  • 18. Demographic parity makes sense when you want to enforce equality between groups whose distributions you deem should be equal if there were no bias Fairness Through Unawareness: problems with correlations, what variables are we allowed to use?
  • 19. fairness metrics landscape (diagram): Demographic Parity, Conditional Demographic Parity, Fairness Through Unawareness, Suppression, arranged by the information they use and by group vs individual metrics and techniques «The zoo of Fairness metrics in Machine Learning», Castelnovo, Crupi, Greco, Regoli, preprint 2021
  • 20. Error rate parities & the COMPAS debate
  • 21. the devil is in the details The COMPAS debate: error rate parities are not all the same. Score vs re-offend / does not re-offend: precision = 75% for White and precision = 75% for Black → ~Predictive Parity
  • 22. the devil is in the details The COMPAS debate: error rate parities are not all the same. Score vs re-offend / does not re-offend: recall = 60% for White vs recall = 86% for Black → Equality of opportunity is violated
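The disagreement can be reproduced on toy numbers (synthetic labels, not the actual COMPAS records): two groups can have identical precision while their recall differs widely.

```python
import numpy as np

def group_rates(y_true, y_pred, sensitive, group):
    """Precision and recall of binary predictions, restricted to one group."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    m = np.asarray(sensitive) == group
    tp = ((y_pred == 1) & (y_true == 1) & m).sum()
    precision = tp / ((y_pred == 1) & m).sum()
    recall = tp / ((y_true == 1) & m).sum()
    return float(precision), float(recall)

# synthetic data: both groups have precision 0.75, but recall 0.6 vs 1.0
y_true = [1, 1, 1, 1, 1, 0,  1, 1, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1,  1, 1, 1, 1]
group  = [0, 0, 0, 0, 0, 0,  1, 1, 1, 1]
```

So "equal error rates" is ambiguous until you say which error rate: predictive parity and equality of opportunity pick different conditionals.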
  • 23. Impossibility Theorem: ~recall parity and ~precision parity cannot hold together when base rates differ between groups. Fairness and Machine Learning, Solon Barocas, Moritz Hardt and Arvind Narayanan (2019) https://fairmlbook.org/
  • 25. pre-processing: remove bias from the dataset, then train the model and validate (e.g. suppression, resampling, massaging). in-processing: train a fairness-aware model on the dataset, then validate (e.g. Adversarial Debiasing, Reduction method). post-processing: train the model, then adjust thresholds at validation.
  • 26. pre-processing Suppression Remove sensitive variable(s) and features highly correlated with them Massaging Re-label enough “minority” observations so that the resulting “massaged” training dataset satisfies demographic parity to begin with. Which observations to re-label is chosen by training an auxiliary model on the original target Sampling Resample observations in order to reach Demographic Parity Fair Representation Learn a representation of the data such that sensitive information is removed while keeping as much information as possible from X “Learning fair representations”, Zemel, Rich, et al. International conference on machine learning. PMLR, 2013 “Data preprocessing techniques for classification without discrimination”, F. Kamiran and T. Calders, Knowledge and Information Systems, 2012
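The massaging step can be sketched as follows (a simplified sketch with made-up names; the scores argument stands for the auxiliary model's ranking, and groups are assumed to be coded 0/1):

```python
import numpy as np

def massage_labels(y, sensitive, scores):
    """Flip borderline labels until both groups have (roughly) equal
    positive rates: promote the best-scored negatives of the deprived
    group, demote the worst-scored positives of the favoured group."""
    y = np.asarray(y).astype(int).copy()
    s, scores = np.asarray(sensitive), np.asarray(scores)
    rates = {g: y[s == g].mean() for g in (0, 1)}
    dep = min(rates, key=rates.get)  # deprived = lower positive rate
    fav = 1 - dep
    # candidates, ranked by the auxiliary model's score
    promote = sorted(np.flatnonzero((s == dep) & (y == 0)),
                     key=lambda i: -scores[i])   # best-scored negatives
    demote = sorted(np.flatnonzero((s == fav) & (y == 1)),
                    key=lambda i: scores[i])     # worst-scored positives
    k = 0
    while (y[s == dep].mean() < y[s == fav].mean()
           and k < min(len(promote), len(demote))):
        y[promote[k]] = 1
        y[demote[k]] = 0
        k += 1
    return y
```

Flipping promote/demote pairs keeps the overall positive rate unchanged while closing the gap between the two groups.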
  • 27. in-processing Idea: a model tries to maximize performance while an Adversary tries to reconstruct the protected variable Z from the model’s outputs Goal: train using alternating gradient steps (U step / W step): the predictor estimates Y and pushes against the Adversary; the Adversary reconstructs Z so that its effect can be removed. Without the projection term, the predictor could move in the direction g+h, which actually helps the adversary; with the projection term, the predictor never moves in a direction that helps the adversary. “Mitigating unwanted biases with adversarial learning”, B. H. Zhang, B. Lemoine, and M. Mitchell, Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018
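The predictor's projected update can be sketched in a few lines (an illustrative numpy sketch of the update rule on flattened gradient vectors; a nonzero adversary gradient is assumed):

```python
import numpy as np

def debiased_gradient(grad_pred, grad_adv, alpha=1.0):
    """Predictor gradient in the style of Zhang et al. (2018): subtract
    the projection of the predictor gradient onto the adversary gradient
    (so the step never helps the adversary), then push against it."""
    g_p = np.asarray(grad_pred, dtype=float)
    g_a = np.asarray(grad_adv, dtype=float)
    proj = (g_p @ g_a) / (g_a @ g_a) * g_a  # component of g_p along g_a
    return g_p - proj - alpha * g_a
```

In the full method these vectors are the gradients of the predictor's and adversary's losses with respect to the predictor's weights, and the adversary itself is trained with ordinary gradient steps in between.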
  • 28. post-processing General idea: tweak the decision threshold separately for the different groups of the sensitive variable. Demographic Parity: straightforward. Equality of Odds: need to look at the ROC curves. Plots from Barocas, Hardt, Narayanan https://fairmlbook.org “Equality of opportunity in supervised learning”, Hardt, Price, Srebro, NeurIPS 2016
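For Demographic Parity, the threshold adjustment reduces to a per-group quantile (a sketch with illustrative names; equalized-odds thresholds would instead require searching along each group's ROC curve):

```python
import numpy as np

def group_thresholds_for_parity(scores, sensitive, target_rate):
    """Per-group score thresholds such that each group receives (roughly)
    the same positive-decision rate: demographic parity via post-processing."""
    scores, s = np.asarray(scores), np.asarray(sensitive)
    thresholds = {}
    for g in np.unique(s):
        # the (1 - rate)-quantile of a group's scores accepts ~rate of it
        thresholds[g] = np.quantile(scores[s == g], 1 - target_rate)
    return thresholds
```

Decisions are then `score >= thresholds[group]`; with ties or coarse score grids the achieved rates are only approximate.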
  • 29. A roadmap to fairness
  • 30. Project description and objectives Joint collaboration to research on Trustworthy AI in Financial domain. The goal is to overview the available metrics and techniques and to come up with a roadmap to follow in order to pursue Fairness. To reach this goal, we carried out explorations on a real-world financial use-case of credit lending. Collect tools of assessment, mitigation, visualization into a fairness toolbox called BeFair. «BeFair: addressing Fairness in the Banking sector» Castelnovo, Crupi, Greco, Del Gamba, Naseer, Regoli, San Miguel Gonzalez, (2020 IEEE Big Data Conference)
  • 31. 1. REGULATORY ASPECTS What are the principles to be followed to identify the appropriate concept of fairness to be pursued? 2. DATASET ASSESSMENT Target assessment: is there human intervention? Features assessment: what information is compatible with the pursued fairness concept? Correlations with protected classes 3. CHOICE OF THE FAIRNESS METRICS Choose the metric(s) that best comply with previous findings 4. BIAS MITIGATION Apply a set of mitigation strategies in order to target the chosen metric 5. COMPARISON AND CHOICE Compare and choose the best strategy Inputs throughout: EXPERT KNOWLEDGE, CONTEXT-BASED FAIRNESS DEFINITION, BUSINESS NEEDS AND LOGICS, LEGAL AND DOMAIN KNOWLEDGE
  • 32. Credit Lending use case ~200,000 loan applications ~50 predictors, including financial variables and personal information. The target is the final decision of a human officer. Dataset assessment Throughout the analysis, we focus on CITIZENSHIP = {0, 1} as the sensitive attribute with respect to which to assess fairness. Bias, measured in terms of Demographic Parity, is negligible in the original target, but amplified by the application of a ML model. 2 «BeFair: addressing Fairness in the Banking sector» Castelnovo, Crupi, Greco, Del Gamba, Naseer, Regoli, San Miguel Gonzalez, (2020 IEEE Big Data Conference)
  • 33. 3 Mitigated Model – Group Level: % loans allowed to citizens: 75.5%, % loans allowed to non-citizens: 75.1% Mitigated Model – Individual Level: % loans allowed to citizens: 77.5%, % loans allowed to non-citizens: 51.7%
  • 35. Counterfactual Fairness Build causal graph with causal discovery algorithms and validate with domain experts. Employ the causal graph to train a counterfactually fair model (Kusner et al. 2017): no causal flow from sensitive attribute to final decision. Nodes are variables, while directed edges express the causal relationships among them. 4
  • 36. BeFair: bias mitigation results 5 «BeFair: addressing Fairness in the Banking sector» Castelnovo, Crupi, Greco, Del Gamba, Naseer, Regoli, San Miguel Gonzalez, (2020 IEEE Big Data Conference)
  • 37. 5 Comparison and choice Proposed methods to identify the best performance-fairness trade-off: Trade-off fairness-performance: score = (1 + β²) · (1 − φ) · π / (β² · (1 − φ) + π) Constrained performance: max π subject to φ < Φ π and φ are the preferred performance and fairness metrics, respectively, and β is the weight associated with the performance metric.
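Read as an Fβ-style score, the trade-off combination can be computed in one line (assuming, as a convention, that φ is a disparity in [0, 1], so that 1 − φ measures fairness; higher score is better):

```python
def tradeoff_score(pi, phi, beta=1.0):
    """F-beta-style harmonic combination of performance pi and fairness
    (1 - phi), with beta weighting the performance term."""
    fair = 1.0 - phi
    return (1 + beta ** 2) * fair * pi / (beta ** 2 * fair + pi)
```

Like Fβ, the harmonic form punishes imbalance: a model that is perfectly accurate but maximally unfair scores 0, so neither term can be traded away entirely.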
  • 38. Conclusions Discriminatory bias is a concrete risk for AI applications at scale. …a crucial point is that Fairness in Machine Learning cannot be left to Data Scientists only. More research is needed on the ethical and legal side to clarify the needs of specific domains. More research is needed on the technical side, e.g. to understand the relationship among different fairness metrics, in particular group vs individual, or to find mitigation strategies enforcing different metrics. Fairness concepts are manifold, and care should be taken in any specific situation.
  • 39. SOME REFERENCES surveys: Barocas, Hardt, Narayanan, Fairness and machine learning (2019) Castelnovo, Crupi, Greco, Regoli, The zoo of Fairness metrics in Machine Learning, preprint (2021) Barocas, Selbst, Big data’s disparate impact, Calif. L. Rev. (2016) Mehrabi et al., A survey on bias and fairness in machine learning, ACM Computing Surveys (2021) bias mitigation: Zhang, Lemoine, Mitchell, Mitigating unwanted biases with adversarial learning, Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (2018) Kamiran, Calders, Data preprocessing techniques for classification without discrimination, Knowledge and Information Systems (2012) Hardt, Price, Srebro, Equality of opportunity in supervised learning, Advances in neural information processing systems (2016) Zemel et al., Learning fair representations, International conference on machine learning, PMLR (2013)
  • 40. thank you daniele regoli Big Data Lab @ Intesa Sanpaolo daniele.regoli@intesasanpaolo.com