MODEL BIAS IN AI
17 Aug 2017
Jason Tamara Widjaja
A biased sample size of one
I am:
• Made in Singapore, 8 years overseas
• Plugging into deep tech: TNB Ventures and Lauretta.io
I work:
• Data science and AI at MSD
• IT, IS, HR, MBA, Analytics, AI
I like:
• Building world-class data science and AI teams
• “Tell me like I’m five” accessible communication
Agenda
1 Introduction: What Is AI in MSD?
2 The Problem: Model Bias in Practice
3 The Solution: A Framework for Mitigating Model Bias
4 Looking Ahead: Towards A Safer AI-enabled Future
Making sense of the AI space
• Thinking Humanly – “Cognitive modeling”
• Thinking Rationally – “Logical laws of thought”
• Acting Humanly – “Turing Test”
• Acting Rationally – “Intelligent agents”
AI as an extension of data science focused on replicating the capabilities of intelligent agents
Descriptive analytics → Predictive analytics → Prescriptive analytics
From human agency to autonomous execution:
• Decision support – “Tool”
• Back office automation / RPA
• Human interaction automation
• Intelligent agent – “Worker”
Agenda
1 Introduction: What Is AI in MSD?
2 The Problem: Model Bias in Practice
3 The Solution: A Framework for Mitigating Model Bias
4 Looking Ahead: Towards A Safer AI-enabled Future
A thought experiment: What does being on the receiving end of a model feel like?
“How can I assign my people to different training programs?”
• 40,000 people
• 4 categories of 100 features each
• 4 tiers of performance: low, medium, high, very high
• Different development tracks
Approach: data collection → supervised learning to predict each person’s tier → clustering within each tier (a sketch follows below)
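A minimal sketch of what this pipeline could look like, assuming scikit-learn and synthetic stand-in data; the feature matrix, tier labels, and model choices below are illustrative assumptions, not the actual MSD model:

```python
# Hypothetical sketch: predict a performance tier, then cluster within each tier.
# All data here is synthetic stand-in data, not real HR data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_features = 40_000, 400          # 4 categories x 100 features each
X = rng.normal(size=(n_people, n_features))
tiers = rng.integers(0, 4, size=n_people)   # 0=low, 1=medium, 2=high, 3=very high

X_train, X_test, y_train, y_test = train_test_split(X, tiers, test_size=0.2, random_state=0)

# Step 1: supervised learning to assign each person a performance tier.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
predicted_tier = clf.predict(X_test)

# Step 2: clustering within each predicted tier to suggest development tracks.
tracks = {}
for tier in range(4):
    members = X_test[predicted_tier == tier]
    if len(members) >= 3:
        tracks[tier] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(members)
```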
Big questions:
• Did the model favour one ‘group’ over another?
• Did the model use information besides real performance in its decision?
• Is the model “fair”?
Now what if the model was fully automated? No appeals. No explanations. No human intervention.
Multiple studies have found evidence that automated, opaque models systematically disadvantage certain demographic groups at scale.
Example: Google’s search results for ‘CEO’ – 14 August 2017.
If we deploy models trained on existing, biased data sources, we may be unknowingly perpetuating discrimination.
Models do not automatically remove bias – used carelessly, they may systematize it.
Agenda
1 Introduction: What Is AI in MSD?
2 The Problem: Model Bias in Practice
3 The Solution: A Framework for Mitigating Model Bias
4 Looking Ahead: Towards A Safer AI-enabled Future
A 4-part framework for mitigating model bias
1. DEVELOPER CHOICE
2. MODEL INTERPRETABILITY
3. MANAGEMENT POLICY
4. USER INTERPRETATION
More transparent tool: “Make the black box less black”
More skillful users: “Make black box users more proficient”
A 4-part framework for mitigating model bias – 1. DEVELOPER CHOICE
“Build models thoughtfully for safer, more understandable models”
• Choose your features consciously
• Trade predictive power for more interpretable models (see the sketch below)
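One way to act on these two points is sketched below, assuming scikit-learn, a pandas DataFrame, and hypothetical column names (`gender`, `age`, `nationality`, `tier`): consciously exclude sensitive features, then compare an interpretable model against a stronger black box before deciding whether the accuracy gain is worth the loss of transparency.

```python
# Hypothetical sketch: conscious feature choice and the interpretability trade-off.
# The input file and column names are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("people.csv")                      # assumed input file
SENSITIVE = ["gender", "age", "nationality"]        # features we consciously exclude
X = df.drop(columns=SENSITIVE + ["tier"])
y = df["tier"]

# Interpretable baseline: coefficients can be inspected and explained.
interpretable = LogisticRegression(max_iter=1000)
# Higher-capacity black box: usually more accurate, harder to explain.
black_box = GradientBoostingClassifier()

for name, model in [("logistic regression", interpretable), ("gradient boosting", black_box)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {score:.3f}")
# If the gap is small, the interpretable model may be the safer choice.
```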
A 4-part framework for mitigating model bias – 2. MODEL INTERPRETABILITY
“Add forensics to models to build trust”
• QII (Quantitative Input Influence) for understandable features
• LIME (Local Interpretable Model-agnostic Explanations) for non-understandable features
Key lime (QII – LIME) pie
QII for understandable features – “Add forensics to models to build trust”
• Model Diagnostics: Which features drive model X?
• Group Impact Diagnostics: When I add feature X, what changes?
• Personal Transparency Reports: Why was person X classified this way?
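QII (Datta et al., 2017; reference 2) quantifies a feature’s influence by intervening on it and measuring how the model’s decisions change. The sketch below is a simplified, randomization-based approximation in that spirit, not the full QII algorithm (which also handles sets of inputs and marginal contributions); the model and data are assumed to come from the earlier pipeline.

```python
# Simplified, QII-inspired influence estimate: randomize one feature at a time
# and measure how often the model's decisions change. Illustrative only.
import numpy as np

def input_influence(model, X, feature_idx, rng=None):
    """Fraction of predictions that change when feature `feature_idx` is
    replaced by values drawn from its own marginal distribution."""
    if rng is None:
        rng = np.random.default_rng(0)
    baseline = model.predict(X)
    X_perturbed = X.copy()
    X_perturbed[:, feature_idx] = rng.permutation(X[:, feature_idx])
    perturbed = model.predict(X_perturbed)
    return np.mean(baseline != perturbed)

# Model diagnostics: which features drive the model? (assumes clf, X_test exist)
# influences = [input_influence(clf, X_test, j) for j in range(X_test.shape[1])]
```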
LIME for non-understandable features – “Add forensics after modeling so approaches are less constrained”
• Example: local explanations contrasting a GOOD model and a BAD model (illustrative LIME outputs shown on the original slides)
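LIME (Ribeiro et al., 2016; reference 1) fits a simple local surrogate around one prediction to show which features pushed it up or down. A sketch of tabular usage with the `lime` package follows; the classifier `clf`, the arrays `X_train`/`X_test`, and the feature and class names are assumed from the earlier pipeline.

```python
# Sketch of LIME on tabular data: explain one individual's predicted tier.
# Assumes `clf`, `X_train`, `X_test` from an earlier step; names are illustrative.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=X_train,
    feature_names=[f"feature_{i}" for i in range(X_train.shape[1])],
    class_names=["low", "medium", "high", "very high"],
    discretize_continuous=True,
)

# Explain why person 0 was classified the way they were.
explanation = explainer.explain_instance(
    X_test[0], clf.predict_proba, num_features=5
)
print(explanation.as_list())  # top local feature contributions, positive or negative
```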
A 4-part framework for mitigating model bias – 3. MANAGEMENT POLICY
“Have a management conversation – what is a fair model? Optimise for more than ROI”
• Demographic Parity: X% of men get a chance; X% of women get a chance
• Equal Opportunity: X% of qualified men get a chance; X% of qualified women get a chance
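Both criteria can be checked directly from a model’s decisions. Below is a minimal sketch, assuming binary NumPy arrays for selections, qualifications, and group membership; it reports the demographic-parity gap (difference in selection rates) and the equal-opportunity gap (difference in selection rates among the qualified, in the sense of Hardt et al., 2016; reference 3).

```python
# Sketch: compare selection rates (demographic parity) and selection rates among
# the qualified, i.e. true positive rates (equal opportunity), across two groups.
# `selected`, `qualified`, `group` are assumed binary arrays of equal length,
# with qualified members present in both groups.
import numpy as np

def fairness_gaps(selected, qualified, group):
    rates, tprs = [], []
    for g in (0, 1):
        mask = group == g
        rates.append(selected[mask].mean())                    # P(selected | group)
        tprs.append(selected[mask & (qualified == 1)].mean())  # P(selected | qualified, group)
    return {
        "demographic_parity_gap": abs(rates[0] - rates[1]),
        "equal_opportunity_gap": abs(tprs[0] - tprs[1]),
    }

# Example with synthetic data:
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 10_000)
qualified = rng.integers(0, 2, 10_000)
selected = (rng.random(10_000) < 0.3 + 0.1 * group).astype(int)  # deliberately biased selection
print(fairness_gaps(selected, qualified, group))
```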
A 4-part framework for mitigating model bias – 4. USER INTERPRETATION
“Train users of AI systems in model diagnostics and AI quality assurance”
Agenda
1 Introduction: What Is AI in MSD?
2 The Problem: Model Bias in Practice
3 The Solution: A Framework for Mitigating Model Bias
4 Looking Ahead: Towards A Safer AI-enabled Future
5 VALUES FROM THE ASILOMAR AI PRINCIPLES
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
Endorsers pictured: Jaan Tallinn (Co-founder, Skype); Viktoriya Krakovna (AI Safety, DeepMind); Nick Bostrom (Director, Oxford Future of Humanity Institute); Erik Brynjolfsson (Director, MIT Center for Digital Business, MIT); Stephen Hawking (Director of Research, Centre for Theoretical Cosmology, Cambridge University); Stuart Russell (Professor of AI, UC Berkeley); Elon Musk (Founder, SpaceX and Tesla Motors).
Appendix and references
1. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). ACM.
2. Datta, A., Sen, S., & Zick, Y. (2017). Algorithmic transparency via quantitative input influence. In Transparent Data Mining for Big and Small Data (pp. 71-94). Springer International Publishing.
3. Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems (pp. 3315-3323).
4. Gunning, D. (2017). Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency (DARPA). Web.
5. AI Principles. (n.d.). Retrieved August 10, 2017, from https://futureoflife.org/ai-principles/
Our tools need to be sharper, but they also need to be safer
Industry 4.0 needs to grow up alongside ethics 4.0