AI Colloquium by AAISI (online), 30.3.2021
Bias in AI systems: A multi-step approach
Eirini Ntoutsi
Free University Berlin
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Eirini Ntoutsi Bias in AI systems: A multi-step approach
Successful applications
[Examples: recommendations, navigation, severe weather alerts, automation]
Questionable uses/ failures
 Google Flu Trends failure
 Microsoft’s bot Tay taken offline after racist tweets
 IBM’s Watson for Oncology cancelled
 Facial recognition works better for white males
Why might AI projects fail?
 Back to basics: How machines learn
 Machine Learning gives computers the ability to learn without being
explicitly programmed (Arthur Samuel, 1959)
 We don’t codify the solution, we don’t even know it!
 DATA & the learning algorithms are the keys
[Diagram: Data + learning algorithms → Models]
Mind the (hidden) assumptions
 Assumptions include: stationarity, independent & identically distributed
(iid) data, balanced class representation, ...
 In this talk, I will focus on the assumption/myth of algorithmic objectivity
 The common misconception that humans are subjective, but data and algorithms
are not, and therefore cannot discriminate.
Reality check: Can algorithms discriminate?
 Bloomberg analysts compared Amazon same-day delivery areas with U.S.
Census Bureau data
 They found that in 6 major same-day delivery cities, the service area
excludes predominantly black ZIP codes to varying degrees.
 Shouldn’t this service be based on customer’s spend rather than race?
 Amazon claimed that race was not used in their models.
Source: https://www.bloomberg.com/graphics/2016-amazon-same-day/
Reality check (cont’d): Can algorithms discriminate?
 There have been already plenty of cases of algorithmic discrimination
 State-of-the-art vision systems (used e.g. in autonomous driving) recognize
white males better than black women (racial and gender bias)
 The AdFisher tool found that Google’s personalized ad system served
significantly fewer ads for high-paying jobs to women than to men (gender bias)
 COMPAS tool (US) for predicting a defendant’s risk of committing another
crime predicted higher risks of recidivism for black defendants (and lower for
white defendants) than their actual risk (racial-bias)
Don’t blame (only) the AI
 “Bias is as old as human civilization” and “it is human nature for members
of the dominant majority to be oblivious to the experiences of other
groups”
 Human bias: a prejudice in favour of or against one thing, person, or group
compared with another usually in a way that’s considered to be unfair.
 Bias triggers (protected attributes): ethnicity, race, age, gender, religion, sexual
orientation …
 Algorithmic bias: the inclination or prejudice of a decision made by an AI
system which is for or against one person or group, especially in a way
considered to be unfair.
Not every bias is a bad bias 1/2
 Inductive bias ”refers to a set of (explicit or implicit) assumptions made by
a learning algorithm in order to perform induction, that is, to generalize a
finite set of observations (training data) into a general model of the
domain. Without a bias of that kind, induction would not be possible,
since the observations can normally be generalized in many ways.”
(Hüllermeier, Fober & Mernberger, 2013)
 Bias-free learning is futile: A learner that makes no a priori assumptions
regarding the identity of the target concept has no rational basis for
classifying any unseen instances.
[Diagram: the learned model is applied to future unknown instances]
Not every bias is a bad bias 2/2
 Some biases are positive and helpful, e.g., making healthy eating choices
 Some biases help us to become more efficient
 E.g., start work early if you are a morning person
 We refer here to bias that might cause discrimination and unfair actions
to an individual or group on the basis of protected attributes like race or
gender.
 Race and gender are examples of bias triggers, but not the only ones
 Triggers can also combine, e.g., black females in IT jobs
 Bias and discrimination depend on context
 Women in tech
 Males in ballet dancing
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
The fairness-aware machine learning domain
 A young, fast evolving, multi-disciplinary research field
 Bias/fairness/discrimination/… have been studied for long in philosophy, social
sciences, law, …
 Existing approaches can be divided into three categories
 Understanding bias
 How bias is created in the society and enters our sociotechnical systems, is
manifested in the data used by AI algorithms, and can be formalized.
 Mitigating bias
 Approaches that tackle bias in different stages of AI-decision making.
 Accounting for bias
 Approaches that account for bias proactively or retroactively.
E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M.-E. Vidal, S. Ruggieri, F. Turini, S. Papadopoulos, E. Krasanakis, I. Kompatsiaris, K. Kinder-Kurlanda, C. Wagner, F. Karimi, M. Fernandez, H. Alani, B. Berendt, T. Kruegel, C. Heinze, K. Broelemann, G. Kasneci, T. Tiropanis, S. Staab, "Bias in data-driven artificial intelligence systems—An introductory survey", WIREs Data Mining and Knowledge Discovery, 2020.
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Understanding bias: Sociotechnical causes of bias
 AI systems rely on data generated by humans (user-generated content) or
collected via systems created by humans.
 As a result, human biases
 enter AI systems
 E.g., bias in word embeddings (Bolukbasi et al., 2016)
 might be amplified by complex sociotechnical systems such as the Web, and
 new types of biases might be created
Understanding bias: How is bias manifested in data?
 Protected attributes and proxies
 E.g., neighborhoods in U.S. cities are highly correlated with race
 Representativeness of data
 E.g., underrepresentation of women and people of color in IT developer
communities and image datasets
 E.g., overrepresentation of black people in drug-related arrests
 Depends on data modalities
Image sources: https://incitrio.com/top-3-lessons-learned-from-the-top-12-marketing-campaigns-ever/ and https://ellengau.medium.com/emily-in-paris-asian-women-i-know-arent-like-mindy-chen-6228e63da333
Typical (batch) fairness-aware learning setup
 Input: D = training dataset drawn from a joint distribution P(F,S,y)
 F: set of non-protected attributes
 S: (typically: binary, single) protected attribute
 s (s̄): protected (non-protected) group
 y = (typically: binary) class attribute {+,-} (+ for accepted, - for rejected)
 Goal of fairness-aware classification: Learn a mapping from f(F, S) → y
 achieves good predictive performance
 eliminates discrimination
      F1   F2   S   y
User1 f11  f12  s   +
User2 f21           -
User3 f31  f23  s   +
…     …    …   …    …
Usern fn1           +
(We know how to measure predictive performance; discrimination is measured according to some fairness measure.)
Measuring (un)fairness: some measures
 Statistical parity: subjects in the protected and non-protected groups
should have equal probability of being assigned to the positive class
P(ŷ = + | S = s) = P(ŷ = + | S = s̄)
 Equal opportunity: there should be no difference in the model’s prediction
errors on the positive class
P(ŷ ≠ y | S = s, y = +) = P(ŷ ≠ y | S = s̄, y = +)
 Disparate mistreatment: there should be no difference in the model’s
prediction errors between protected and non-protected groups for either
class
δFNR = P(ŷ ≠ y | S = s, y = +) - P(ŷ ≠ y | S = s̄, y = +)
δFPR = P(ŷ ≠ y | S = s, y = -) - P(ŷ ≠ y | S = s̄, y = -)
Disparate Mistreatment = δFNR + δFPR
      F1   F2   S   y   ŷ
User1 f11  f12  s   +   -
User2 f21           -   +
User3 f31  f23  s   +   -
…     …    …   …    …   …
Usern fn1           +   +
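The measures above can be computed directly from a model's predictions. A minimal sketch in Python; the function name, the +1/-1 label encoding, and the boolean protected-group mask are my own conventions, and disparate mistreatment is returned with absolute differences (equivalent to the slide's plain sum when both deltas share a sign):

```python
import numpy as np

def fairness_report(y_true, y_pred, protected):
    """Group-fairness measures for binary labels in {+1, -1}.
    protected: boolean mask, True for the protected group s."""
    s, ns = protected, ~protected

    # Statistical parity difference: P(y_hat=+ | S=s) - P(y_hat=+ | S=s_bar)
    stat_parity = np.mean(y_pred[s] == 1) - np.mean(y_pred[ns] == 1)

    def err(mask):  # prediction error rate within a subgroup
        return np.mean(y_pred[mask] != y_true[mask])

    # Equal opportunity: error difference on the positive class only
    d_fnr = err(s & (y_true == 1)) - err(ns & (y_true == 1))
    # Disparate mistreatment: error differences on both classes
    d_fpr = err(s & (y_true == -1)) - err(ns & (y_true == -1))
    return stat_parity, d_fnr, abs(d_fnr) + abs(d_fpr)
```

A model can be perfectly fair under statistical parity and still fail disparate mistreatment (and vice versa), which is why several measures are reported in practice.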
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Mitigating bias
 Goal: tackling bias in different stages of AI-decision making
[Diagram: Data → Algorithms → Models → Applications (hiring, banking, healthcare, education, autonomous driving, …); pre-processing approaches act on the data, in-processing approaches on the algorithms, post-processing approaches on the models]
Mitigating bias: pre-processing approaches
 Intuition: making the data more fair will result in a less unfair model
 Idea: balance the protected and non-protected groups in the dataset
 Design principle: minimal data interventions (to retain data utility for the
learning task)
 Different techniques:
 Instance class modification (massaging), (Kamiran & Calders, 2009),(Luong,
Ruggieri, & Turini, 2011)
 Instance selection (sampling), (Kamiran & Calders, 2010) (Kamiran & Calders,
2012)
 Instance weighting, (Calders, Kamiran, & Pechenizkiy, 2009)
 Synthetic instance generation (Iosifidis & Ntoutsi, 2018)
 …
Mitigating bias: pre-processing approaches: Massaging
 Change the class label of carefully selected instances (Kamiran & Calders, 2009).
 The selection is based on a ranker which ranks individuals by their
probability of receiving the favorable outcome.
 The number of massaged instances depends on the fairness measure (group fairness)
Image credit Vasileios Iosifidis
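A minimal sketch of massaging, assuming the ranker scores (estimated probability of the favourable outcome) and the number of flips are already given; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def massage(scores, y, protected, n_flips):
    """Massaging sketch (after Kamiran & Calders, 2009): flip the labels
    of carefully selected instances. scores: ranker's estimated probability
    of the favourable (+1) outcome; y: labels in {+1, -1}."""
    y = y.copy()
    # promote the protected rejected instances the ranker likes most
    promo = np.where(protected & (y == -1))[0]
    promo = promo[np.argsort(-scores[promo])][:n_flips]
    # demote the non-protected accepted instances the ranker likes least
    demo = np.where(~protected & (y == 1))[0]
    demo = demo[np.argsort(scores[demo])][:n_flips]
    y[promo], y[demo] = 1, -1
    return y
```

Flipping equal numbers in both directions keeps the overall acceptance rate fixed while closing the statistical-parity gap with minimal interventions.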
Mitigating bias
 Goal: tackling bias in different stages of AI-decision making
Mitigating bias: in-processing approaches
 Intuition: working directly with the algorithm allows for better control
 Idea: explicitly incorporate the model’s discrimination behavior in the
objective function
 Design principle: “balancing” predictive- and fairness-performance
 Different techniques:
 Regularization (Kamiran, Calders & Pechenizkiy, 2010),(Kamishima, Akaho,
Asoh & Sakuma, 2012), (Dwork, Hardt, Pitassi, Reingold & Zemel, 2012) (Zhang
& Ntoutsi, 2019)
 Constraints (Zafar, Valera, Gomez-Rodriguez & Gummadi, 2017)
 Training on latent target labels (Krasanakis, Xioufis, Papadopoulos &
Kompatsiaris, 2018)
 In-training altering of data distribution (Iosifidis & Ntoutsi, 2019)
 …
Mitigating bias: in-processing approaches: change the objective
function 1/2
 An example with decision tree (DTs) classifiers
 Traditional DTs use pureness of the data (in terms of class-labels) to decide on
which attribute to use for splitting
 Which attribute to choose for splitting? A1 or A2?
 The goal is to select the attribute that is most useful for classifying examples.
 helps us be more certain about the class after the split
 we would like the resulting partitioning to be as pure as possible
 Pureness evaluated via entropy - A partition is pure if all its instances belong to the same class.
 But traditional measures (Information Gain, Gini Index, …) ignore fairness
Mitigating bias: in-processing approaches: change the objective
function 2/2
 We introduce the fairness gain of an attribute (FG)
 Disc(D) corresponds to statistical parity (group fairness)
 We introduce the joint criterion, fair information gain (FIG) that evaluates
the suitability of a candidate splitting attribute A in terms of both
predictive performance and fairness.
[Diagram: splitting dataset D into partitions D1 and D2]
W. Zhang, E. Ntoutsi, “An Adaptive Fairness-aware Decision Tree Classifier", IJCAI 2019.
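The two ingredients can be sketched as follows, with Disc(D) as statistical parity. How information gain and fairness gain are combined (a plain sum here, via the `combine` parameter) is a placeholder assumption, not necessarily the exact FIG criterion of the paper:

```python
import numpy as np

def entropy(y):
    # y: labels in {0, 1}
    p = np.bincount(y).astype(float)
    p = p[p > 0] / len(y)
    return -np.sum(p * np.log2(p))

def disc(y, s):
    """Statistical-parity discrimination of a partition:
    P(y=+ | non-protected) - P(y=+ | protected)."""
    if s.all() or (~s).all():
        return 0.0
    return np.mean(y[~s] == 1) - np.mean(y[s] == 1)

def fair_information_gain(y, s, split, combine=lambda ig, fg: ig + fg):
    """split: boolean mask defining the two child partitions D1, D2."""
    n = len(y)
    parts = (split, ~split)
    # classic information gain: entropy reduction achieved by the split
    ig = entropy(y) - sum(m.sum() / n * entropy(y[m]) for m in parts)
    # fairness gain: reduction in absolute discrimination after the split
    fg = abs(disc(y, s)) - sum(m.sum() / n * abs(disc(y[m], s[m])) for m in parts)
    return combine(ig, fg)
```

A splitting attribute then scores well only if it both purifies the class labels and does not concentrate discrimination in its children.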
Mitigating bias
 Goal: tackling bias in different stages of AI-decision making
Mitigating bias: post-processing approaches
 Intuition: start with predictive performance
 Idea: first optimize the model for predictive performance and then tune
for fairness
 Design principle: minimal interventions (to retain model predictive
performance)
 Different techniques:
 Correct the confidence scores (Pedreschi, Ruggieri, & Turini, 2009), (Calders &
Verwer, 2010)
 Correct the class labels (Kamiran et al., 2010)
 Change the decision boundary (Kamiran, Mansha, Karim, & Zhang, 2018), (Hardt,
Price, & Srebro, 2016)
 Wrap a fair classifier on top of a black-box learner (Agarwal, Beygelzimer, Dudík,
Langford, & Wallach, 2018)
 …
Mitigating bias: post-processing approaches: shift the decision
boundary
 An example of decision boundary shift
V. Iosifidis, H.T. Thi Ngoc, E. Ntoutsi, “Fairness-enhancing interventions in stream classification", DEXA 2019.
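A simplified illustration of such a shift (not the DEXA'19 method verbatim): keep the non-protected group's threshold fixed at a base value and lower the protected group's threshold until the statistical-parity gap closes. The function name, the 0.5 base threshold, and the candidate-threshold scheme are my assumptions:

```python
import numpy as np

def shift_boundary(scores, protected, tol=0.0):
    """Lower the protected group's decision threshold until the
    statistical-parity gap drops to tol. scores: positive-class scores."""
    base = scores >= 0.5  # unshifted decisions for the non-protected group
    theta, pred = 0.5, base
    # candidate thresholds: the protected group's own scores, high to low
    for theta in [0.5] + sorted(scores[protected], reverse=True):
        pred = np.where(protected, scores >= theta, base)
        gap = np.mean(pred[~protected]) - np.mean(pred[protected])
        if gap <= tol:
            return theta, pred
    return theta, pred
```

Because only the threshold moves, the trained model itself is untouched, which is the defining property of post-processing approaches.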
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Accounting for bias
 Algorithmic accountability refers to the assignment of responsibility for
how an algorithm is created and its impact on society (Kaplan et al, 2019).
 Many facets of accountability for AI-driven algorithms and different
approaches
 Proactive approaches:
 bias-aware data collection, e.g., for Web data, crowd-sourcing
 Bias-description and modeling, e.g., via ontologies
 ...
 Retroactive approaches:
 Explaining AI decisions in order to understand whether decisions are biased
 What is an explanation? Explanations w.r.t. legal/ethical grounds?
 Using explanations for fairness-aware corrections (inspired by Schramowski et al, 2020)
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Fairness with sequential learners (boosting)
 Sequential ensemble methods generate base learners in a sequence
 The sequential generation of base learners promotes the dependence between
the base learners.
 Each learner learns from the mistakes of the previous predictor
 The weak learners are combined to build a strong learner
 Popular examples: Adaptive Boosting (AdaBoost), Extreme Gradient Boosting
(XGBoost).
 Our base model is AdaBoost (Freund and Schapire, 1995), a sequential ensemble
method that in each round, re-weights the training data to focus on misclassified
instances.
[Figure: boosting rounds 1-3 train weak learners h1, h2, h3, which are combined into the final strong learner H()]
Intuition behind using boosting for fairness
 It is easier to make “fairness-related interventions” in simpler models
rather than complex ones
 We can use the whole sequence of learners for the interventions instead
of the current one
Limitations of related work
 Existing works evaluate predictive performance in terms of the overall
classification error rate (ER), e.g., [Calders et al’09, Calmon et al’17, Fish et
al’16, Hardt et al’16, Krasanakis et al’18, Zafar et al’17]
 In case of class-imbalance, ER is misleading
 Most of the datasets however suffer from imbalance
 Moreover, Dis.Mis. is “oblivious” to the class imbalance problem
From Adaboost to AdaFair
 We tailor AdaBoost to fairness
 We introduce the notion of cumulative fairness that assesses the fairness of
the model up to the current boosting round (partial ensemble).
 We directly incorporate fairness in the instance weighting process
(traditionally focusing on classification performance).
 We optimize the number of weak learners in the final ensemble based on
balanced error rate thus directly considering class imbalance in the best model
selection.
ER = 1 - (TP + TN) / (TP + FN + TN + FP)
V. Iosifidis, E. Ntoutsi, “AdaFair: Cumulative Fairness Adaptive Boosting", ACM CIKM 2019.
AdaFair: Cumulative boosting fairness
 Let j ∈ {1, …, T} be the current boosting round; T is user-defined
 Let H1:j denote the partial ensemble up to the current round j
 The cumulative fairness of the ensemble up to round j is defined based
on the parity in the predictions of the partial ensemble between
protected and non-protected groups
 “Forcing” the model to consider “historical” fairness over all previous
rounds instead of just focusing on the current round hj() results in better
classifier performance and model convergence.
AdaFair: fairness-aware weighting of instances
 Vanilla AdaBoost already boosts misclassified instances for the next round
 Our weighting explicitly targets fairness by extra boosting discriminated
groups for the next round
 The data distribution at boosting round j+1 is updated so that, on top of
AdaBoost’s boosting of misclassified instances, each instance xi ϵ D that
belongs to a currently discriminated group receives an additional
fairness-related cost ui
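The shape of the update can be sketched as follows; this is an illustrative simplification of the CIKM'19 formulas, with α the weak learner's weight and the fairness cost u applied as a multiplicative (1 + u) boost on misclassified members of the discriminated group:

```python
import numpy as np

def reweight(w, y, pred, discriminated, alpha, u):
    """One fairness-aware boosting-round weight update (sketch).
    w: current instance weights; y, pred: labels/predictions in {+1, -1};
    discriminated: mask of the currently discriminated group."""
    miss = (y != pred)
    w_new = w * np.exp(alpha * miss)                    # vanilla AdaBoost boost
    w_new = w_new * (1 + u * (miss & discriminated))    # extra fairness boost
    return w_new / w_new.sum()                          # renormalize
```

The effect is that the next weak learner sees misclassified, discriminated instances with disproportionately high weight, steering the sequence toward both accuracy and fairness.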
AdaFair: optimizing the number of weak learners
 Typically, the number of boosting rounds/ weak learners T is user-defined
 We propose to select the optimal subsequence of learners 1 … θ, θ ≤ T
that minimizes the balanced error rate (BER)
 In particular, we consider both ER and BER in the objective function
argminθ [ c · BERθ + (1 - c) · ERθ + Dis.Mis.θ ]
 The result of this optimization is a final ensemble model with Eq.Odds
fairness
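The selection can be sketched by scanning the ensemble prefixes; here `partial_preds[j]` (predictions of the first j+1 weak learners) and their disparate-mistreatment values are assumed to be pre-computed, and the names are illustrative:

```python
import numpy as np

def balanced_error_rate(y, pred):
    # BER: average of the per-class error rates (robust to class imbalance)
    return float(np.mean([np.mean(pred[y == c] != c) for c in (1, -1)]))

def error_rate(y, pred):
    # ER = 1 - (TP + TN) / (TP + FN + TN + FP)
    return float(np.mean(pred != y))

def select_theta(y, partial_preds, dis_mis, c=0.5):
    """Return the 1-based prefix length theta minimizing
    c*BER + (1 - c)*ER + Dis.Mis. over all partial ensembles."""
    obj = [c * balanced_error_rate(y, p) + (1 - c) * error_rate(y, p) + d
           for p, d in zip(partial_preds, dis_mis)]
    return int(np.argmin(obj)) + 1
```

On an imbalanced dataset a majority-class predictor gets a low ER but a BER of 0.5, which is exactly why BER enters the objective.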
Experimental evaluation
 Datasets of varying imbalance
 Baselines
 AdaBoost [Sch99]: vanilla AdaBoost
 SMOTEBoost [CLHB03]: AdaBoost with SMOTE for imbalanced data.
 Krasanakis et al. [KXPK18]: Boosting method which minimizes Dis.Mis. by approximating
the underlying distribution of hidden correct labels.
 Zafar et al.[ZVGRG17]: Training logistic regression model with convex-concave
constraints to minimize Dis.Mis.
 AdaFair NoCumul: Variation of AdaFair that computes the fairness weights based on
individual weak learners.
Experiments: Predictive and fairness performance
 Adult census income (ratio 1+:3-)  Bank dataset (ratio 1+:8-)
Larger values are better; for Dis.Mis., lower values are better.
 Our method achieves high balanced accuracy and low discrimination (Dis.Mis.) while maintaining high
TPRs and TNRs for both groups.
 The methods of Zafar et al. and Krasanakis et al. eliminate discrimination by rejecting more positive
instances (lowering TPRs).
Cumulative vs non-cumulative fairness
 Cumulative vs non-cumulative fairness impact on model performance
 Cumulative notion of fairness performs better
 The cumulative model (AdaFair) is more stable than its non-cumulative
counterpart (whose standard deviation is higher)
Outline
 Introduction
 Dealing with bias in data-driven AI systems
 Understanding bias
 Mitigating bias
 Accounting for bias
 Case: bias-mitigation with sequential ensemble learners (boosting)
 Wrapping up
Wrapping-up, ongoing work and future directions
 In this talk I focused on the myth of algorithmic objectivity and
 the reality of algorithmic bias and discrimination: algorithms can pick up biases
existing in the input data and further reinforce them
 A large body of research already exists but
 it focuses mainly on fully-supervised batch learning with a single protected (and typically
binary) attribute and binary classes
 Moving from batch learning to online learning
 it targets bias in some step of the analysis pipeline, but biases/errors might be propagated
and even amplified (unified approaches are needed)
 Moving from isolated approaches (pre-, in- or post-processing) to combined approaches
V. Iosifidis, E. Ntoutsi, “FABBOO - Online Fairness-aware Learning under Class Imbalance", DS 2020.
T. Hu, V. Iosifidis, W. Liao, H. Zang, M. Yang, E. Ntoutsi,B. Rosenhahn, "FairNN - Conjoint Learning of Fair Representations for Fair
Decisions”, DS 2020.
Wrapping-up, ongoing work and future directions
 Moving from single-protected attribute fairness-aware learning to multi-
fairness
 Existing legal studies define multi-fairness as compound, intersectional and
overlapping [Makkonen 2002].
 Moving from fully-supervised learning to unsupervised and reinforcement
learning
 Moving from myopic (maximize short-term/immediate performance) solutions
to non-myopic ones (that consider long-term performance) [Zhang et al,2020]
 Actionable approaches (counterfactual generation)
Thank you for your attention!
Questions?
https://nobias-project.eu/
@NoBIAS_ITN
https://lernmint.org/
Feel free to contact me:
• eirini.ntoutsi@fu-berlin.de
• @entoutsi
• https://www.mi.fu-
berlin.de/en/inf/groups/ag-KIML/index.html
https://www.bias-project.org/

More Related Content

What's hot

Fundamentals of Artificial Intelligence — QU AIO Leadership in AI
Fundamentals of Artificial Intelligence — QU AIO Leadership in AIFundamentals of Artificial Intelligence — QU AIO Leadership in AI
Fundamentals of Artificial Intelligence — QU AIO Leadership in AIJunaid Qadir
 
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...HWZ Hochschule für Wirtschaft
 
Defining the boundary for AI research in Intelligent Systems Dec 2021
Defining the boundary for AI research in Intelligent Systems Dec  2021Defining the boundary for AI research in Intelligent Systems Dec  2021
Defining the boundary for AI research in Intelligent Systems Dec 2021Parasuram Balasubramanian
 
Explainable AI in Industry (KDD 2019 Tutorial)
Explainable AI in Industry (KDD 2019 Tutorial)Explainable AI in Industry (KDD 2019 Tutorial)
Explainable AI in Industry (KDD 2019 Tutorial)Krishnaram Kenthapadi
 
Explainability and bias in AI
Explainability and bias in AIExplainability and bias in AI
Explainability and bias in AIBill Liu
 
Artificial Intelligence AI Topics History and Overview
Artificial Intelligence AI Topics History and OverviewArtificial Intelligence AI Topics History and Overview
Artificial Intelligence AI Topics History and Overviewbutest
 
Machine Learning Final presentation
Machine Learning Final presentation Machine Learning Final presentation
Machine Learning Final presentation AyanaRukasar
 
Machine learning ppt
Machine learning pptMachine learning ppt
Machine learning pptRajat Sharma
 
Fairness in Machine Learning
Fairness in Machine LearningFairness in Machine Learning
Fairness in Machine LearningDelip Rao
 
Explainable AI with H2O Driverless AI's MLI module
Explainable AI with H2O Driverless AI's MLI moduleExplainable AI with H2O Driverless AI's MLI module
Explainable AI with H2O Driverless AI's MLI moduleMartin Dvorak
 
Machine Learning Using Python
Machine Learning Using PythonMachine Learning Using Python
Machine Learning Using PythonSavitaHanchinal
 
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...Adriano Soares Koshiyama
 
Machine Learning: Applications, Process and Techniques
Machine Learning: Applications, Process and TechniquesMachine Learning: Applications, Process and Techniques
Machine Learning: Applications, Process and TechniquesRui Pedro Paiva
 
Don't Handicap AI without Explicit Knowledge
Don't Handicap AI  without Explicit KnowledgeDon't Handicap AI  without Explicit Knowledge
Don't Handicap AI without Explicit KnowledgeAmit Sheth
 
Machine Learning
Machine LearningMachine Learning
Machine Learningbutest
 
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo...Maryam Farooq
 
Artificial intelligence: Simulation of Intelligence
Artificial intelligence: Simulation of IntelligenceArtificial intelligence: Simulation of Intelligence
Artificial intelligence: Simulation of IntelligenceAbhishek Upadhyay
 
Deep Neural Networks for Machine Learning
Deep Neural Networks for Machine LearningDeep Neural Networks for Machine Learning
Deep Neural Networks for Machine LearningJustin Beirold
 
Introduction to Machine Learning
Introduction to Machine LearningIntroduction to Machine Learning
Introduction to Machine LearningCloudxLab
 

What's hot (20)

Fundamentals of Artificial Intelligence — QU AIO Leadership in AI
Fundamentals of Artificial Intelligence — QU AIO Leadership in AIFundamentals of Artificial Intelligence — QU AIO Leadership in AI
Fundamentals of Artificial Intelligence — QU AIO Leadership in AI
 
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
Schlussreferat: Bias in Algorithmen Marcel Blattner, Chief Data Scientist, Ta...
 
Defining the boundary for AI research in Intelligent Systems Dec 2021
Defining the boundary for AI research in Intelligent Systems Dec  2021Defining the boundary for AI research in Intelligent Systems Dec  2021
Defining the boundary for AI research in Intelligent Systems Dec 2021
 
Explainable AI in Industry (KDD 2019 Tutorial)
Explainable AI in Industry (KDD 2019 Tutorial)Explainable AI in Industry (KDD 2019 Tutorial)
Explainable AI in Industry (KDD 2019 Tutorial)
 
Explainability and bias in AI
Explainability and bias in AIExplainability and bias in AI
Explainability and bias in AI
 
Artificial Intelligence AI Topics History and Overview
Artificial Intelligence AI Topics History and OverviewArtificial Intelligence AI Topics History and Overview
Artificial Intelligence AI Topics History and Overview
 
Machine Learning Final presentation
Machine Learning Final presentation Machine Learning Final presentation
Machine Learning Final presentation
 
Machine learning ppt
Machine learning pptMachine learning ppt
Machine learning ppt
 
Fairness in Machine Learning
Fairness in Machine LearningFairness in Machine Learning
Fairness in Machine Learning
 
Explainable AI with H2O Driverless AI's MLI module
Explainable AI with H2O Driverless AI's MLI moduleExplainable AI with H2O Driverless AI's MLI module
Explainable AI with H2O Driverless AI's MLI module
 
Machine Learning Using Python
Machine Learning Using PythonMachine Learning Using Python
Machine Learning Using Python
 
machine learning
machine learningmachine learning
machine learning
 
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut...
 
Machine Learning: Applications, Process and Techniques
Machine Learning: Applications, Process and TechniquesMachine Learning: Applications, Process and Techniques
Machine Learning: Applications, Process and Techniques
 
Don't Handicap AI without Explicit Knowledge
Don't Handicap AI  without Explicit KnowledgeDon't Handicap AI  without Explicit Knowledge
Don't Handicap AI without Explicit Knowledge
 
Understanding and Mitigating Bias in AI

  • 1. AI Colloquium by AAISI (online), 30.3.2021 Bias in AI systems: A multi-step approach Eirini Ntoutsi Free University Berlin
  • 2. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Case: bias-mitigation with sequential ensemble learners (boosting)  Wrapping up 3 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 3. Successful applications 4 Eirini Ntoutsi Bias in AI systems: A multi-step approach Recommendations Navigation Severe weather alerts Automation
  • 4. Questionable uses/failures 5 Eirini Ntoutsi Bias in AI systems: A multi-step approach Google flu trends failure Microsoft’s bot Tay taken offline after racist tweets IBM’s Watson for Oncology cancelled Facial recognition works better for white males
  • 5. Why might AI projects fail?  Back to basics: how machines learn  Machine Learning gives computers the ability to learn without being explicitly programmed (Arthur Samuel, 1959)  We don’t codify the solution; we don’t even know it!  DATA & the learning algorithms are the keys 6 Eirini Ntoutsi Bias in AI systems: A multi-step approach [Diagram: Data → Algorithms → Models]
  • 6. Mind the (hidden) assumptions  Assumptions include: stationarity, independent & identically distributed (iid) data, balanced class representation, ...  In this talk, I will focus on the assumption/myth of algorithmic objectivity: the common misconception that humans are subjective, but data and algorithms are not, and therefore cannot discriminate. 7 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 7. Reality check: Can algorithms discriminate?  Bloomberg analysts compared Amazon same-day delivery areas with U.S. Census Bureau data  They found that in 6 major same-day delivery cities, the service area excludes predominantly black ZIP codes to varying degrees.  Shouldn’t this service be based on customers’ spending rather than race?  Amazon claimed that race was not used in their models. 8 Eirini Ntoutsi Bias in AI systems: A multi-step approach Source: https://www.bloomberg.com/graphics/2016-amazon-same-day/
  • 8. Reality check cont’d: Can algorithms discriminate?  There have already been plenty of cases of algorithmic discrimination  State-of-the-art vision systems (used e.g. in autonomous driving) recognize white males better than black women (racial and gender bias)  Google’s AdFisher tool for serving personalized ads was found to serve significantly fewer ads for high-paying jobs to women than to men (gender bias)  The COMPAS tool (US) for predicting a defendant’s risk of committing another crime predicted higher risks of recidivism for black defendants (and lower for white defendants) than their actual risk (racial bias) 9 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 9. Don’t blame (only) the AI  “Bias is as old as human civilization” and “it is human nature for members of the dominant majority to be oblivious to the experiences of other groups”  Human bias: a prejudice in favour of or against one thing, person, or group compared with another, usually in a way that’s considered to be unfair.  Bias triggers (protected attributes): ethnicity, race, age, gender, religion, sexual orientation …  Algorithmic bias: the inclination or prejudice of a decision made by an AI system which is for or against one person or group, especially in a way considered to be unfair. 10 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 10. Every bias is not necessarily a bad bias 1/2  Inductive bias ”refers to a set of (explicit or implicit) assumptions made by a learning algorithm in order to perform induction, that is, to generalize a finite set of observations (training data) into a general model of the domain. Without a bias of that kind, induction would not be possible, since the observations can normally be generalized in many ways.” (Hüllermeier, Fober & Mernberger, 2013)  Bias-free learning is futile: a learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances. 11 Eirini Ntoutsi Bias in AI systems: A multi-step approach [Diagram: model applied to future, unknown instances]
  • 11. Every bias is not necessarily a bad bias 2/2  Some biases are positive and helpful, e.g., making healthy eating choices  Some biases help us to become more efficient  E.g., start work early if you are a morning person  We refer here to bias that might cause discrimination and unfair actions towards an individual or group on the basis of protected attributes like race or gender.  Race and gender are examples of bias triggers, but not the only ones  Triggers can also combine, e.g., black females in IT jobs  Bias and discrimination depend on context  Women in tech  Males in ballet dancing 12 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 12. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Case: bias-mitigation with sequential ensemble learners (boosting)  Wrapping up 13 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 13. The fairness-aware machine learning domain  A young, fast-evolving, multi-disciplinary research field  Bias/fairness/discrimination/… have long been studied in philosophy, social sciences, law, …  Existing approaches can be divided into three categories  Understanding bias  How bias is created in society and enters our sociotechnical systems, is manifested in the data used by AI algorithms, and can be formalized.  Mitigating bias  Approaches that tackle bias in different stages of AI decision making.  Accounting for bias  Approaches that account for bias proactively or retroactively. 14 Eirini Ntoutsi Bias in AI systems: A multi-step approach E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M.-E. Vidal, S. Ruggieri, F. Turini, S. Papadopoulos, E. Krasanakis, I. Kompatsiaris, K. Kinder-Kurlanda, C. Wagner, F. Karimi, M. Fernandez, H. Alani, B. Berendt, T. Kruegel, C. Heinze, K. Broelemann, G. Kasneci, T. Tiropanis, S. Staab, "Bias in data-driven artificial intelligence systems—An introductory survey", WIREs Data Mining and Knowledge Discovery, 2020.
  • 14. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Case: bias-mitigation with sequential ensemble learners (boosting)  Wrapping up 16 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 15. Understanding bias: Sociotechnical causes of bias  AI systems rely on data generated by humans (user-generated content) or collected via systems created by humans.  As a result, human biases  enter AI systems  E.g., bias in word embeddings (Bolukbasi et al., 2016)  might be amplified by complex sociotechnical systems such as the Web, and  new types of biases might be created 17 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 16. Understanding bias: How is bias manifested in data?  Protected attributes and proxies  E.g., neighborhoods in U.S. cities are highly correlated with race  Representativeness of data  E.g., underrepresentation of women and people of color in IT developer communities and image datasets  E.g., overrepresentation of black people in drug-related arrests  Depends on data modalities 18 Eirini Ntoutsi Bias in AI systems: A multi-step approach https://incitrio.com/top-3-lessons-learned-from- the-top-12-marketing-campaigns-ever/ https://ellengau.medium.com/emily-in- paris-asian-women-i-know-arent-like- mindy-chen-6228e63da333
  • 17. Typical (batch) fairness-aware learning setup  Input: D = training dataset drawn from a joint distribution P(F, S, y)  F: set of non-protected attributes  S: (typically a single, binary) protected attribute  s (s̄): protected (non-protected) group  y: (typically binary) class attribute {+, −} (+ for accepted, − for rejected)  Goal of fairness-aware classification: learn a mapping f(F, S) → y that  achieves good predictive performance  eliminates discrimination 19 Eirini Ntoutsi Bias in AI systems: A multi-step approach [Table: example training dataset with non-protected attributes F1, F2, protected attribute S, and class label y]  We know how to measure this  According to some fairness measure
  • 18. Measuring (un)fairness: some measures  Statistical parity: subjects in both protected and non-protected groups should have equal probability of being assigned to the positive class: P(ŷ = + | S = s) = P(ŷ = + | S = s̄)  Equal opportunity: there should be no difference in the model’s prediction errors for the positive class: P(ŷ ≠ y | S = s, y = +) = P(ŷ ≠ y | S = s̄, y = +)  Disparate mistreatment: there should be no difference in the model’s prediction errors between protected and non-protected groups for either class: δFNR = P(ŷ ≠ y | S = s, y = +) − P(ŷ ≠ y | S = s̄, y = +), δFPR = P(ŷ ≠ y | S = s, y = −) − P(ŷ ≠ y | S = s̄, y = −), Disparate Mistreatment = |δFNR| + |δFPR| 22 Eirini Ntoutsi Bias in AI systems: A multi-step approach [Table: example dataset with true labels y and model predictions ŷ]
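The three measures above translate directly into a few lines of NumPy. A minimal sketch (function and variable names are illustrative, not from the talk), assuming binary labels where 1 is the favorable "+" outcome and a boolean mask marks the protected group s:

```python
import numpy as np

def fairness_measures(y_true, y_pred, protected):
    """Group-fairness measures for binary labels (1 = favorable '+').

    protected: boolean mask, True for members of the protected group s.
    """
    prot, non = protected, ~protected
    # Statistical parity difference: P(yhat=+ | s) - P(yhat=+ | s_bar)
    stat_parity = y_pred[prot].mean() - y_pred[non].mean()
    # Per-group error rates: FNR on the positive class, FPR on the negative
    fnr = lambda g: (y_pred[g & (y_true == 1)] == 0).mean()
    fpr = lambda g: (y_pred[g & (y_true == 0)] == 1).mean()
    d_fnr = fnr(prot) - fnr(non)   # equal opportunity difference
    d_fpr = fpr(prot) - fpr(non)
    return {
        "statistical_parity": stat_parity,
        "equal_opportunity": d_fnr,
        "disparate_mistreatment": abs(d_fnr) + abs(d_fpr),
    }
```

A value of 0 on each measure indicates parity between the groups; the magnitudes quantify how far the model is from it.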
  • 19. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Case: bias-mitigation with sequential ensemble learners (boosting)  Wrapping up 24 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 20. Mitigating bias  Goal: tackling bias in different stages of AI decision making 26 Eirini Ntoutsi Bias in AI systems: A multi-step approach [Diagram: Data → Algorithms → Models → Applications: hiring, banking, healthcare, education, autonomous driving, …] Pre-processing approaches In-processing approaches Post-processing approaches
  • 21. Mitigating bias: pre-processing approaches  Intuition: making the data more fair will result in a less unfair model  Idea: balance the protected and non-protected groups in the dataset  Design principle: minimal data interventions (to retain data utility for the learning task)  Different techniques:  Instance class modification (massaging), (Kamiran & Calders, 2009),(Luong, Ruggieri, & Turini, 2011)  Instance selection (sampling), (Kamiran & Calders, 2010) (Kamiran & Calders, 2012)  Instance weighting, (Calders, Kamiran, & Pechenizkiy, 2009)  Synthetic instance generation (Iosifidis & Ntoutsi, 2018)  … 27 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 22. Mitigating bias: pre-processing approaches: Massaging  Change the class label of carefully selected instances (Kamiran & Calders, 2009).  The selection is based on a ranker which ranks the individuals by their probability to receive the favorable outcome.  The number of massaged instances depends on the fairness measure (group fairness) 28 Eirini Ntoutsi Bias in AI systems: A multi-step approach Image credit Vasileios Iosifidis
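The massaging step above can be sketched in a few lines. In the paper the ranker is a learned classifier (e.g. Naive Bayes); in this sketch its scores are simply passed in, and all names (`massage`, `n_swap`, …) are illustrative rather than taken from the original work:

```python
import numpy as np

def massage(y, scores, protected, n_swap):
    """Minimal sketch of massaging (after Kamiran & Calders, 2009).

    scores: ranker outputs estimating P(y = + | x) for each instance.
    Promote the n_swap highest-scoring protected negatives to '+' and
    demote the n_swap lowest-scoring non-protected positives to '-',
    moving the groups' acceptance rates closer together with minimal
    data intervention.
    """
    y = y.copy()
    # Protected negatives with the highest scores -> promote to '+'
    cand_pos = np.where(protected & (y == 0))[0]
    promote = cand_pos[np.argsort(scores[cand_pos])[::-1][:n_swap]]
    # Non-protected positives with the lowest scores -> demote to '-'
    cand_neg = np.where(~protected & (y == 1))[0]
    demote = cand_neg[np.argsort(scores[cand_neg])[:n_swap]]
    y[promote] = 1
    y[demote] = 0
    return y
```

Choosing the instances closest to the decision boundary (highest-scoring negatives, lowest-scoring positives) is what keeps the intervention minimal and preserves data utility.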
  • 23. Mitigating bias  Goal: tackling bias in different stages of AI decision making 30 Eirini Ntoutsi Bias in AI systems: A multi-step approach [Diagram: Data → Algorithms → Models → Applications: hiring, banking, healthcare, education, autonomous driving, …] Pre-processing approaches In-processing approaches Post-processing approaches
  • 24. Mitigating bias: in-processing approaches  Intuition: working directly with the algorithm allows for better control  Idea: explicitly incorporate the model’s discrimination behavior in the objective function  Design principle: “balancing” predictive- and fairness-performance  Different techniques:  Regularization (Kamiran, Calders & Pechenizkiy, 2010),(Kamishima, Akaho, Asoh & Sakuma, 2012), (Dwork, Hardt, Pitassi, Reingold & Zemel, 2012) (Zhang & Ntoutsi, 2019)  Constraints (Zafar, Valera, Gomez-Rodriguez & Gummadi, 2017)  Training on latent target labels (Krasanakis, Xioufis, Papadopoulos & Kompatsiaris, 2018)  In-training altering of data distribution (Iosifidis & Ntoutsi, 2019)  … 31 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 25. Mitigating bias: in-processing approaches: change the objective function 1/2  An example with decision tree (DTs) classifiers  Traditional DTs use pureness of the data (in terms of class-labels) to decide on which attribute to use for splitting  Which attribute to choose for splitting? A1 or A2?  The goal is to select the attribute that is most useful for classifying examples.  helps us be more certain about the class after the split  we would like the resulting partitioning to be as pure as possible  Pureness evaluated via entropy - A partition is pure if all its instances belong to the same class.  But traditional measures (Information Gain, Gini Index, …) ignore fairness 32 Eirini Ntoutsi Bias in AI systems: A multi-step approach
  • 26. Mitigating bias: in-processing approaches: change the objective function 2/2  We introduce the fairness gain (FG) of an attribute  Disc(D) corresponds to statistical parity (group fairness)  We introduce the joint criterion, fair information gain (FIG), that evaluates the suitability of a candidate splitting attribute A in terms of both predictive performance and fairness  [Figure: split of dataset D into partitions D1 and D2]  W. Zhang, E. Ntoutsi, “An Adaptive Fairness-aware Decision Tree Classifier", IJCAI 2019.
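A sketch of the two quantities on a split of D into partitions: Disc(D) is the statistical-parity gap as stated on the slide, and FG is the drop in (absolute) discrimination achieved by the split. The way FIG combines information gain and fairness gain below is an illustrative assumption, not necessarily the paper's exact formula:

```python
import math

# Hedged sketch of a fairness-aware splitting criterion in the spirit of the
# fair information gain (FIG) of Zhang & Ntoutsi (IJCAI 2019).

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def disc(data):
    """data: list of (group, label); gap in favorable-outcome (label=1) rates."""
    rate = lambda g: (sum(l for grp, l in data if grp == g)
                      / max(1, sum(grp == g for grp, l in data)))
    return rate('non_prot') - rate('prot')

def information_gain(data, partitions):
    labels = [l for _, l in data]
    return entropy(labels) - sum(len(p) / len(data) * entropy([l for _, l in p])
                                 for p in partitions)

def fairness_gain(data, partitions):
    # reduction in (absolute) discrimination due to the split
    return abs(disc(data)) - sum(len(p) / len(data) * abs(disc(p))
                                 for p in partitions)

def fair_information_gain(data, partitions):
    ig, fg = information_gain(data, partitions), fairness_gain(data, partitions)
    return ig * fg if fg != 0 else ig  # illustrative combination
```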
  • 27. Mitigating bias  Goal: tackling bias in different stages of AI-decision making  [Figure: AI pipeline Data → Algorithms → Models → Applications (hiring, banking, healthcare, education, autonomous driving, …), annotated with pre-processing, in-processing and post-processing intervention points]
  • 28. Mitigating bias: post-processing approaches  Intuition: start with predictive performance  Idea: first optimize the model for predictive performance and then tune for fairness  Design principle: minimal interventions (to retain model predictive performance)  Different techniques:  Correct the confidence scores (Pedreschi, Ruggieri, & Turini, 2009), (Calders & Verwer, 2010)  Correct the class labels (Kamiran et al., 2010)  Change the decision boundary (Kamiran, Mansha, Karim, & Zhang, 2018), (Hardt, Price, & Srebro, 2016)  Wrap a fair classifier on top of a black-box learner (Agarwal, Beygelzimer, Dudík, Langford, & Wallach, 2018)  …
  • 29. Mitigating bias: post-processing approaches: shift the decision boundary  An example of a decision boundary shift  V. Iosifidis, H.T. Thi Ngoc, E. Ntoutsi, “Fairness-enhancing interventions in stream classification", DEXA 2019.
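One common realization of the boundary-shift idea is a group-specific threshold: keep the trained scorer fixed and lower the protected group's decision threshold until its positive rate matches the non-protected group's. The sketch below is a minimal version of that idea under those assumptions; the cited paper's exact procedure differs:

```python
# Hedged sketch of a post-processing boundary shift via group-specific
# thresholds: the trained model is untouched, only its decision boundary
# for the protected group is moved.

def positive_rate(scores, threshold):
    return sum(s >= threshold for s in scores) / len(scores)

def shift_boundary(prot_scores, non_prot_scores, threshold=0.5, step=0.01):
    """Return a (possibly lower) decision threshold for the protected group."""
    t_prot = threshold
    target = positive_rate(non_prot_scores, threshold)
    # shift the boundary towards the protected group until parity is reached
    while t_prot > 0 and positive_rate(prot_scores, t_prot) < target:
        t_prot -= step
    return t_prot
```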
  • 30. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Case: bias-mitigation with sequential ensemble learners (boosting)  Wrapping up
  • 31. Accounting for bias  Algorithmic accountability refers to the assignment of responsibility for how an algorithm is created and its impact on society (Kaplan et al, 2019)  Many facets of accountability for AI-driven algorithms and different approaches  Proactive approaches:  bias-aware data collection, e.g., for Web data, crowd-sourcing  bias description and modeling, e.g., via ontologies  ...  Retroactive approaches:  explaining AI decisions in order to understand whether decisions are biased  What is an explanation? Explanations w.r.t. legal/ethical grounds?  using explanations for fairness-aware corrections (inspired by Schramowski et al, 2020)
  • 32. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Case: bias-mitigation with sequential ensemble learners (boosting)  Wrapping up
  • 33. Fairness with sequential learners (boosting)  Sequential ensemble methods generate base learners in a sequence, so that each base learner depends on its predecessors  Each learner learns from the mistakes of the previous predictor  The weak learners are combined to build a strong learner  Popular examples: Adaptive Boosting (AdaBoost), Extreme Gradient Boosting (XGBoost)  Our base model is AdaBoost (Freund and Schapire, 1995), a sequential ensemble method that, in each round, re-weights the training data to focus on misclassified instances  [Figure: Round 1: weak learner h1 → Round 2: weak learner h2 → Round 3: weak learner h3 → final strong learner H()]
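The round-by-round loop can be sketched as follows; one-feature threshold stumps as weak learners and the toy data layout are illustrative choices, not the deck's implementation, but the re-weighting step is AdaBoost's standard exponential update:

```python
import math

# Hedged sketch of AdaBoost's sequential training loop (Freund & Schapire,
# 1995): each round fits a weak learner under the current weights, then
# up-weights the instances that learner misclassified.

def stump(X, y, w):
    """Best threshold/polarity stump on 1-D inputs under weights w."""
    best = None
    for t in set(X):
        for pol in (1, -1):
            preds = [pol if x >= t else -pol for x in X]
            err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, t, pol = stump(X, y, w)
        err = max(err, 1e-10)  # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # up-weight the misclassified instances for the next round
        preds = [pol if x >= t else -pol for x in X]
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x >= t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1
```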
  • 34. Intuition behind using boosting for fairness  It is easier to make “fairness-related interventions” in simpler models rather than complex ones  We can use the whole sequence of learners for the interventions instead of the current one
  • 35. Limitations of related work  Existing works evaluate predictive performance in terms of the overall classification error rate (ER), e.g., [Calders et al’09, Calmon et al’17, Fish et al’16, Hardt et al’16, Krasanakis et al’18, Zafar et al’17]  In case of class imbalance, ER is misleading  Most real-world datasets, however, suffer from class imbalance  Moreover, disparate mistreatment (Dis.Mis.) is “oblivious” to the class-imbalance problem
  • 36. From AdaBoost to AdaFair  We tailor AdaBoost to fairness  We introduce the notion of cumulative fairness that assesses the fairness of the model up to the current boosting round (partial ensemble).  We directly incorporate fairness in the instance weighting process (traditionally focusing on classification performance).  We optimize the number of weak learners in the final ensemble based on the balanced error rate, thus directly considering class imbalance in the best-model selection.  ER = 1 − (TP + TN) / (TP + FN + TN + FP)  V. Iosifidis, E. Ntoutsi, “AdaFair: Cumulative Fairness Adaptive Boosting", ACM CIKM 2019.
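Why ER misleads under class imbalance, while the balanced error rate (BER) does not, can be seen on a toy imbalanced dataset: a classifier that always predicts the majority class obtains a low ER but a BER of 0.5.

```python
# Hedged illustration of ER vs. BER: the definitions follow the standard
# confusion-matrix quantities stated on the slide.

def rates(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def error_rate(y_true, y_pred):
    tp, tn, fp, fn = rates(y_true, y_pred)
    return 1 - (tp + tn) / (tp + tn + fp + fn)

def balanced_error_rate(y_true, y_pred):
    # average of the per-class error rates
    tp, tn, fp, fn = rates(y_true, y_pred)
    return 0.5 * (fn / (tp + fn) + fp / (tn + fp))
```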
  • 37. AdaFair: Cumulative boosting fairness  Let j: 1…T be the current boosting round, where T is user-defined  Let H1:j() be the partial ensemble up to the current round j  The cumulative fairness of the ensemble up to round j is defined based on the parity in the predictions of the partial ensemble between the protected and non-protected groups  “Forcing” the model to consider “historical” fairness over all previous rounds, instead of just focusing on the current round hj(), results in better classifier performance and model convergence
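A sketch of the cumulative measure, assuming learners are given as (alpha, prediction-function) pairs and using the statistical-parity gap as a simplified stand-in for the paper's parity measure: fairness is evaluated on the weighted vote of all learners so far, not on the current weak learner alone.

```python
# Hedged sketch of "cumulative fairness": measure parity on the partial
# ensemble H_1:j rather than on the individual weak learner h_j.

def partial_ensemble_predict(learners, x):
    """Weighted vote of the learners 1..j; learners are (alpha, h) pairs."""
    score = sum(alpha * h(x) for alpha, h in learners)
    return 1 if score >= 0 else -1

def cumulative_fairness(learners, X, groups):
    """Statistical-parity gap of the partial ensemble's predictions."""
    preds = [partial_ensemble_predict(learners, x) for x in X]
    pos = lambda g: (sum(p == 1 for p, gr in zip(preds, groups) if gr == g)
                     / groups.count(g))
    return pos('non_prot') - pos('prot')
```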
  • 38. AdaFair: fairness-aware weighting of instances  Vanilla AdaBoost already boosts misclassified instances for the next round  Our weighting explicitly targets fairness by additionally boosting, for the next round, instances of groups that are currently discriminated against  The data distribution at boosting round j+1 is updated by incorporating a fairness-related cost into AdaBoost’s misclassification-based re-weighting  The fairness-related cost ui is assigned to instances xi ϵ D which belong to a group that is discriminated against by the partial ensemble
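A sketch of such a fairness-aware weight update; the multiplicative (1 + u) factor and the flat cost for misclassified members of the discriminated group are illustrative assumptions (AdaFair derives the cost from the partial ensemble's behavior, which this sketch does not reproduce):

```python
import math

# Hedged sketch of fairness-aware re-weighting: on top of AdaBoost's usual
# exp(-alpha * y * h(x)) update, instances of the currently discriminated
# group get an extra multiplicative boost when misclassified.

def fairness_cost(group, y, pred, discriminated_group, u=0.5):
    """Flat extra cost for misclassified members of the discriminated group
    (illustrative assumption)."""
    return u if group == discriminated_group and pred != y else 0.0

def update_weights(w, y, preds, groups, alpha, discriminated_group):
    new_w = [wi * math.exp(-alpha * yi * pi)
             * (1 + fairness_cost(gi, yi, pi, discriminated_group))
             for wi, yi, pi, gi in zip(w, y, preds, groups)]
    z = sum(new_w)                      # normalize back to a distribution
    return [wi / z for wi in new_w]
```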
  • 39. AdaFair: optimizing the number of weak learners  Typically, the number of boosting rounds/weak learners T is user-defined  We propose to select the optimal subsequence of learners 1…θ, θ ≤ T, that minimizes the balanced error rate (BER)  In particular, we consider both ER and BER in the objective function: argmin_θ c·BER_θ + (1 − c)·ER_θ + Dis.Mis._θ  The result of this optimization is a final ensemble model with Eq.Odds fairness
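The prefix selection can be sketched directly; the per-prefix metric values are assumed inputs here (in AdaFair they are measured on the training set for each partial ensemble):

```python
# Hedged sketch of selecting the ensemble size theta: evaluate every prefix
# 1..theta of the weak-learner sequence and keep the one minimizing
# c*BER + (1-c)*ER + Dis.Mis.

def best_prefix(metrics, c=0.5):
    """metrics: list of (ER, BER, dis_mis) tuples for prefix lengths 1..T.
    Returns the 1-based prefix length minimizing the joint objective."""
    def objective(m):
        er, ber, dis_mis = m
        return c * ber + (1 - c) * er + dis_mis
    return min(range(1, len(metrics) + 1),
               key=lambda theta: objective(metrics[theta - 1]))
```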
  • 40. Experimental evaluation  Datasets of varying imbalance  Baselines  AdaBoost [Sch99]: vanilla AdaBoost  SMOTEBoost [CLHB03]: AdaBoost with SMOTE for imbalanced data.  Krasanakis et al. [KXPK18]: Boosting method which minimizes Dis.Mis. by approximating the underlying distribution of hidden correct labels.  Zafar et al. [ZVGRG17]: Training logistic regression model with convex-concave constraints to minimize Dis.Mis.  AdaFair NoCumul: Variation of AdaFair that computes the fairness weights based on individual weak learners.
  • 41. Experiments: Predictive and fairness performance  Adult census income (ratio 1+:3-)  Bank dataset (ratio 1+:8-)  Larger values are better; for Dis.Mis., lower values are better  Our method achieves high balanced accuracy and low discrimination (Dis.Mis.) while maintaining high TPRs and TNRs for both groups.  The methods of Zafar et al and Krasanakis et al eliminate discrimination by rejecting more positive instances (lowering TPRs).
  • 42. Cumulative vs non-cumulative fairness  Cumulative vs non-cumulative fairness impact on model performance  The cumulative notion of fairness performs better  The cumulative model (AdaFair) is also more stable than its non-cumulative counterpart (its standard deviation is lower)  [Figure: Eq.Odds and Dis.Mis. per approach]
  • 43. Outline  Introduction  Dealing with bias in data-driven AI systems  Understanding bias  Mitigating bias  Accounting for bias  Case: bias-mitigation with sequential ensemble learners (boosting)  Wrapping up
  • 44. Wrapping-up, ongoing work and future directions  In this talk I focused on the myth of algorithmic objectivity and  the reality of algorithmic bias and discrimination, and how algorithms can pick up biases existing in the input data and further reinforce them  A large body of research already exists, but  it focuses mainly on fully-supervised batch learning with single-protected (and typically binary) attributes and binary classes  it targets bias in some step of the analysis pipeline, but biases/errors might be propagated and even amplified (unified approaches are needed)  Moving from batch learning to online learning  Moving from isolated approaches (pre-, in- or post-processing) to combined approaches  V. Iosifidis, E. Ntoutsi, “FABBOO - Online Fairness-aware Learning under Class Imbalance", DS 2020.  T. Hu, V. Iosifidis, W. Liao, H. Zang, M. Yang, E. Ntoutsi, B. Rosenhahn, "FairNN - Conjoint Learning of Fair Representations for Fair Decisions”, DS 2020.
  • 45. Wrapping-up, ongoing work and future directions  Moving from single-protected attribute fairness-aware learning to multi- fairness  Existing legal studies define multi-fairness as compound, intersectional and overlapping [Makkonen 2002].  Moving from fully-supervised learning to unsupervised and reinforcement learning  Moving from myopic (maximize short-term/immediate performance) solutions to non-myopic ones (that consider long-term performance) [Zhang et al, 2020]  Actionable approaches (counterfactual generation)
  • 46. Thank you for your attention!  Questions?  https://nobias-project.eu/ @NoBIAS_ITN  https://lernmint.org/  https://www.bias-project.org/  Feel free to contact me: • eirini.ntoutsi@fu-berlin.de • @entoutsi • https://www.mi.fu-berlin.de/en/inf/groups/ag-KIML/index.html