MODEL BIAS IN AI
17 Aug 2017
Jason Tamara Widjaja
A biased sample size of one
I am:
• Made in Singapore, 8 years overseas
• Plugging into deep tech: TNB Ventures and
Lauretta.io
I work:
• Data science and AI at MSD
• IT, IS, HR, MBA, Analytics, AI
I like:
• Building world class data science and AI teams
• “Tell me like I’m five” accessible communication
Agenda
3
1 Introduction: What Is AI in MSD?
2 The Problem: Model Bias in Practice
3 The Solution: A Framework for Mitigating Model Bias
4 Looking Ahead: Towards A Safer AI-enabled Future
Acting Humanly
“Turing Test”
Making sense of the AI space
4
Thinking Humanly
“Cognitive modeling”
Thinking Rationally
“Logical laws of thought”
Acting Rationally
“Intelligent agents”
AI as an extension of data science focused on replicating the capabilities of intelligent agents
Analytics maturity: descriptive analytics → predictive analytics → prescriptive analytics
Spectrum of agency: human agency and decision support (“tool”) → back office automation / RPA → human interaction automation → intelligent agent with autonomous execution (“worker”)
Agenda
7
1 Introduction: What Is AI in MSD?
2 The Problem: Model Bias in Practice
3 The Solution: A Framework for Mitigating Model Bias
4 Looking Ahead: Towards A Safer AI-enabled Future
A thought experiment: What does being on the receiving end of a model feel like?
“How can I assign my people to different training programs?”
40,000 people
4 categories of 100 features each
4 tiers of performance: low, medium, high, very high
Different development tracks
Data collection
Supervised learning
Clustering within each tier (a code sketch follows this slide)
Big questions:
• Did the model favour one ‘group’ over
another?
• Did the model use information besides real
performance in its decision?
• Is the model “fair”?
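To make the thought experiment concrete, here is a minimal sketch of the pipeline the slide describes: supervised learning to predict each person’s performance tier from the collected features, then clustering within each tier to suggest development tracks. The file name, column names, and the choice of random forest and k-means are illustrative assumptions, not the deck’s actual implementation.

```python
# Hypothetical sketch of the thought-experiment pipeline:
# predict a performance tier, then cluster people within each tier.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

df = pd.read_csv("hr_features.csv")       # assumed: ~40,000 rows, ~400 feature columns
X = df.drop(columns=["performance_tier"])
y = df["performance_tier"]                # low / medium / high / very high

# Step 1: supervised learning on the labelled performance tiers
tier_model = RandomForestClassifier(n_estimators=200, random_state=0)
tier_model.fit(X, y)
df["predicted_tier"] = tier_model.predict(X)

# Step 2: clustering within each predicted tier to form development tracks
for tier, members in df.groupby("predicted_tier"):
    k = 3                                  # assumed number of tracks per tier
    tracks = KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(
        members.drop(columns=["performance_tier", "predicted_tier"]))
    print(tier, pd.Series(tracks).value_counts().to_dict())
```

Everyone in the 40,000 then receives a development track from this pipeline, which is exactly the position the “big questions” above ask you to imagine being in.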
Now what if the model was fully automated?
No appeals. No explanations. No human intervention.
Multiple studies have found evidence that automated, opaque models systemically
disadvantage certain demographic groups at scale
Google’s search results for ‘CEO’ – 14 August 2017
If we deploy models trained on existing, biased data sources, we may be unknowingly
perpetuating discrimination
Models do not automatically remove bias; used carelessly, they may systematize it
Agenda
16
1 Introduction: AI in MSD
2 The Problem: Model Bias in Practice
3 Fixing Projects: Pressure Points
4 Fixing Systems: Broader Considerations
1 Introduction: What Is AI in MSD?
2 The Problem: Model Bias in Practice
3 The Solution: A Framework for Mitigating Model Bias
4 Looking Ahead: Towards A Safer AI-enabled Future
A 4-part framework for mitigating model bias
1. DEVELOPER CHOICE
2. MODEL INTERPRETABILITY
→ A more transparent tool: “Make the black box less black”
3. MANAGEMENT POLICY
4. USER INTERPRETATION
→ More skillful users: “Make black box users more proficient”
A 4-part framework for mitigating model bias
1. DEVELOPER CHOICE: “Build models thoughtfully for safer, more understandable models”
• Choose your features consciously
• Trade predictive power for more interpretable models
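One way to act on these two choices is to exclude features that proxy for protected attributes and to prefer a model whose coefficients can be read directly. The sketch below is illustrative only; the file, the column names, the `DROP` list, and the use of scikit-learn’s LogisticRegression are assumptions, not part of the original deck.

```python
# Hypothetical sketch: conscious feature selection plus an interpretable model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("employees.csv")           # assumed file, one row per employee
DROP = ["gender", "age", "nationality"]     # features we consciously exclude
X = df.drop(columns=DROP + ["performance_tier"])
y = df["performance_tier"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear model trades some predictive power for coefficients we can inspect.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Inspect which features drive each tier; large weights on dubious proxies
# (e.g. postcode) are a prompt to revisit the feature set.
for tier, coefs in zip(clf.classes_, clf.coef_):
    top = pd.Series(coefs, index=X.columns).abs().nlargest(5)
    print(tier, list(top.index))
```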
A 4-part framework for mitigating model bias
2. MODEL INTERPRETABILITY: “Add forensics to models to build trust”
• QII (Quantitative Input Influence) for understandable features
• LIME (Local Interpretable Model-agnostic Explanations) for non-understandable features
Key lime (QII – LIME) pie
A 4-part framework for mitigating model bias
2. MODEL INTERPRETABILITY: “Add forensics to models to build trust”
QII for understandable features:
• Model Diagnostics – Which features drive model X?
• Group Impact Diagnostics – When I add feature X, what changes?
• Personal Transparency Reports – Why was person X classified this way?
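QII measures how much an input influences decisions by intervening on that input and observing how often the outcome changes (Datta, Sen & Zick, 2017). Below is a minimal sketch of the unary, resampling-based version of that idea; the function name, the flip-rate estimator, and the reuse of `clf` and `X_test` from the earlier sketch are my simplifications for illustration, not the paper’s exact algorithm.

```python
# Hypothetical sketch of unary Quantitative Input Influence:
# how often does the prediction change when one feature is resampled
# from its marginal distribution, everything else held fixed?
import numpy as np

def unary_qii(model, X, feature, n_samples=50, rng=None):
    """Average probability that the predicted class changes when `feature`
    is replaced by a value drawn from its marginal distribution."""
    rng = rng or np.random.default_rng(0)
    baseline = model.predict(X)
    flips = np.zeros(len(X))
    for _ in range(n_samples):
        X_int = X.copy()
        # Intervene: shuffle the column, i.e. draw from its marginal.
        X_int[feature] = rng.permutation(X[feature].to_numpy())
        flips += (model.predict(X_int) != baseline)
    return flips.mean() / n_samples

# Example: rank features by influence on the tier model (assumed objects).
# influence = {f: unary_qii(clf, X_test, f) for f in X_test.columns}
```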
A 4-part framework for mitigating model bias
2. MODEL INTERPRETABILITY: “Add forensics after modeling so approaches are less constrained”
LIME for non-understandable features: GOOD MODEL
A 4-part framework for mitigating model bias
2. MODEL INTERPRETABILITY: “Add forensics after modeling so approaches are less constrained”
LIME for non-understandable features: BAD MODEL
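LIME fits a simple local surrogate around a single prediction so a human can see which features pushed that decision one way or the other, which is how the “good model / bad model” contrast above is made visible (Ribeiro et al., 2016). The sketch below assumes the open-source `lime` package is installed and reuses the `X_train`, `X_test`, and `clf` objects from the earlier sketch; none of this is the deck’s own code.

```python
# Hypothetical sketch: explain one employee's predicted tier with LIME.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=X_train.to_numpy(),
    feature_names=list(X_train.columns),
    class_names=[str(c) for c in clf.classes_],
    discretize_continuous=True,
)

# Explain a single row: which features most influenced this classification?
i = 0
exp = explainer.explain_instance(
    X_test.to_numpy()[i],
    clf.predict_proba,
    num_features=5,
)
print(exp.as_list())   # e.g. [("tenure > 7.0", 0.21), ...] (illustrative output)
```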
A 4-part framework for mitigating model bias
3. MANAGEMENT POLICY: “Have a management conversation – what is a fair model? Optimise for more than ROI”
• Demographic Parity – X% of men get a chance; X% of women get a chance
• Equal Opportunity – X% of qualified men get a chance; X% of qualified women get a chance
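Both definitions can be checked directly on a model’s outputs: demographic parity compares selection rates across groups, while equal opportunity (Hardt, Price & Srebro, 2016) compares selection rates among the qualified only, i.e. true positive rates. A minimal sketch, assuming binary “gets a chance” decisions and a group label that is held out of the model’s features:

```python
# Hypothetical sketch: compare selection rates under two fairness definitions.
# y_true = 1 if the person is actually qualified, y_pred = 1 if the model
# gives them a chance, group = demographic group (not used as a feature).
import numpy as np

def demographic_parity(y_pred, group):
    """Selection rate per group: P(chance | group)."""
    return {g: float(y_pred[group == g].mean()) for g in np.unique(group)}

def equal_opportunity(y_true, y_pred, group):
    """True positive rate per group: P(chance | qualified, group)."""
    return {g: float(y_pred[(group == g) & (y_true == 1)].mean())
            for g in np.unique(group)}

# Toy example:
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
group  = np.array(["men", "men", "men", "women", "women", "women"])
print(demographic_parity(y_pred, group))         # {'men': 0.33..., 'women': 1.0}
print(equal_opportunity(y_true, y_pred, group))  # {'men': 0.5, 'women': 1.0}
```

Which of these gaps counts as “unfair”, and how much predictive performance to give up to close them, is the management conversation the slide calls for.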
A 4-part framework for mitigating model bias
4. USER INTERPRETATION: “Train users of AI systems in model diagnostics and AI quality assurance”
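One concrete form this training can take is teaching users to run simple, repeatable checks on a model they did not build before acting on its output, for example per-group error rates. The sketch below is an assumed example of such a check, reusing the toy arrays from the fairness sketch above.

```python
# Hypothetical QA check a non-developer could run: per-group error rates.
import numpy as np

def per_group_error_rate(y_true, y_pred, group):
    """Share of wrong decisions per group; large gaps warrant escalation."""
    return {g: float((y_pred[group == g] != y_true[group == g]).mean())
            for g in np.unique(group)}

print(per_group_error_rate(y_true, y_pred, group))
# A gap between groups is a signal to ask the developers for deeper
# diagnostics (QII / LIME) before trusting the model's assignments.
```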
Agenda
28
1 Introduction: What Is AI in MSD?
2 The Problem: Model Bias in Practice
3 The Solution: A Framework for Mitigating Model Bias
4 Looking Ahead: Towards A Safer AI-enabled Future
5 VALUES FROM THE ASILOMAR AI PRINCIPLES
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of
their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can
be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human
dignity, rights, freedoms, and cultural diversity.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of
humanity.
Signatories pictured: Jaan Tallinn, Co-founder, Skype; Viktoriya Krakovna, AI Safety, DeepMind; Nick Bostrom, Director, Oxford Future of Humanity Institute; Erik Brynjolfsson, Director, MIT Center for Digital Business; Stephen Hawking, Director of Research, Centre for Theoretical Cosmology, Cambridge University; Stuart Russell, Professor of AI, UC Berkeley; Elon Musk, Founder, SpaceX and Tesla Motors
Appendix and references
1. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). Why should I trust you?: Explaining the predictions of
any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining (pp. 1135-1144). ACM.
2. Datta, A., Sen, S., & Zick, Y. (2017). Algorithmic transparency via quantitative input influence. In Transparent
Data Mining for Big and Small Data (pp. 71-94). Springer International Publishing.
3. Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in
Neural Information Processing Systems (pp. 3315-3323).
4. Gunning, D. (2017). Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).
5. AI Principles. (n.d.). Retrieved August 10, 2017, from https://futureoflife.org/ai-principles/
Our tools need to be sharper, but they also need to be safer
Industry 4.0 needs to grow up alongside ethics 4.0