@darkicebeam
Trust: How confident are consumers in how organizations implement AI?
- 35 % trust AI
- 65 % don't trust AI

Responsibility: Should organizations be held accountable for AI misuse?
- 77 % yes
- 23 % no
Sources:
Accenture 2022 Tech Vision Research https://www.accenture.com/dk-en/insights/technology/technology-trends-2022
Accenture 2019 Global Risk Study https://www.accenture.com/us-en/insights/financial-services/global-risk-study
Responsible AI is an approach to evaluating, developing, and deploying AI systems in a safe, trustworthy, and ethical manner, and to making responsible decisions and taking responsible actions.
Generally speaking, Responsible AI is the practice of upholding ethical principles when designing, building, and using artificial intelligence systems.
smartnoise.org
Differential privacy adds noise so that the maximum impact of any individual on the outcome of an aggregate analysis is at most epsilon (ϵ):
- The incremental privacy risk between opting out and participating is, for any individual, governed by ϵ.
- Lower ϵ values result in greater privacy but lower accuracy.
- Higher ϵ values result in greater accuracy but a higher risk of identifying individuals.
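The SmartNoise toolkit implements this in production; as a minimal, library-free sketch of the idea (function names here are illustrative, not the SmartNoise API), a count query has sensitivity 1, so adding Laplace noise with scale 1/ϵ makes it ϵ-differentially private:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from the Laplace(0, scale) distribution
    # (the tiny offset guards against log(0) at the edge of the range).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u) + 1e-300)

def dp_count(values, predicate, epsilon: float) -> float:
    # A count query has sensitivity 1 (one person changes the count by at
    # most 1), so Laplace noise with scale 1/epsilon gives epsilon-DP.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
ages = [23, 37, 41, 52, 29, 61, 45, 33]  # true count of ages >= 40 is 4
# Low epsilon: strong privacy, noisy answer.
print(dp_count(ages, lambda a: a >= 40, epsilon=0.1))
# High epsilon: weak privacy, answer close to 4.
print(dp_count(ages, lambda a: a >= 40, epsilon=10.0))
```

Running the same query at both ϵ values makes the trade-off from the bullets above concrete: the ϵ = 0.1 answer wanders far from 4, while the ϵ = 10 answer barely moves.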
Fairness is the absence of negative impact on groups based on:
Ethnicity
Gender
Age
Physical disability
Other sensitive features
Create models with parity constraints.

Algorithms:
- Exponentiated Gradient: a *reduction* technique that applies a cost-minimization approach to learning the optimal trade-off between overall predictive performance and fairness disparity (binary classification and regression).
- Grid Search: a simplified version of the Exponentiated Gradient algorithm that works efficiently with small numbers of constraints (binary classification and regression).
- Threshold Optimizer: a *post-processing* technique that applies a constraint to an existing classifier, transforming predictions as appropriate (binary classification).

Constraints:
- Demographic parity: minimize disparity in the selection rate across sensitive feature groups.
- True positive rate parity: minimize disparity in the true positive rate across sensitive feature groups.
- False positive rate parity: minimize disparity in the false positive rate across sensitive feature groups.
- Equalized odds: minimize disparity in the combined true positive rate and false positive rate across sensitive feature groups.
- Error rate parity: ensure that the error rate for each sensitive feature group does not deviate from the overall error rate by more than a specified amount.
- Bounded group loss: restrict the loss for each sensitive feature group in a regression model.
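To make the first constraint concrete, demographic parity can be measured as the gap in selection rates between sensitive groups. This is a library-free sketch of the quantity the mitigation algorithms above try to minimize (Fairlearn itself ships an equivalent metric):

```python
def selection_rates(predictions, sensitive_features):
    """Selection rate (fraction predicted positive) per sensitive group."""
    groups = {}
    for pred, group in zip(predictions, sensitive_features):
        groups.setdefault(group, []).append(pred)
    return {g: sum(p) / len(p) for g, p in groups.items()}

def demographic_parity_difference(predictions, sensitive_features):
    """Largest gap in selection rate across groups (0 = perfect parity)."""
    rates = selection_rates(predictions, sensitive_features)
    return max(rates.values()) - min(rates.values())

preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, groups))                 # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(preds, groups))   # 0.5
```

A demographic parity difference of 0.5 means group A is selected at a rate 50 percentage points higher than group B; a constraint-aware algorithm such as Exponentiated Gradient retrains the model to push this gap toward zero while losing as little accuracy as possible.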
Packages that contribute to the explainability and transparency of a model: Interpret-Community, InterpretML, and Fairlearn.

- Global feature importance: the overall importance of each feature across all test data; indicates the relative influence of each feature on the predicted label.
- Local feature importance: the importance of each feature for an individual prediction; for classification, this shows the relative support for each possible class per feature.
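Global feature importance can be illustrated with a simple permutation test (a library-free sketch; the packages above compute richer, model-specific attributions): shuffle one feature column at a time and measure how much the model's score drops. Features the model relies on produce large drops; ignored features produce none.

```python
import random

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Global importance: score drop when one feature column is shuffled,
    breaking that feature's link to the target."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            column = [row[j] for row in X]
            rng.shuffle(column)
            X_perm = [row[:j] + [column[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(baseline - metric(y, [predict(row) for row in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

def accuracy(y_true, y_pred):
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

# Toy model: the label depends only on feature 0, so shuffling feature 0
# hurts accuracy while shuffling feature 1 has no effect at all.
X = [[0, 5], [1, 3], [0, 7], [1, 1], [0, 2], [1, 9]]
y = [0, 1, 0, 1, 0, 1]
print(permutation_importance(lambda row: row[0], X, y, accuracy))
```

Local feature importance works per-row instead of over the whole test set; the interpretability packages above expose both views.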
Fairness questions across the model lifecycle:

- Problem statement: Is an algorithm an ethical solution to the problem?
- Building datasets: Is the training data representative of different groups? Are there biases in the labels or features? Do I need to modify the data to mitigate biases?
- Algorithm selection / training process: Is it necessary to include fairness constraints in the objective function?
- Evaluation / testing process: Has the model been evaluated using relevant fairness metrics?
- Deployment: Is the model used on a population for which it has not been trained or evaluated? Are there side effects among users?
- Follow-up / feedback: Does the model encourage feedback loops that can produce increasingly unfair results?
Guidelines for human-AI interaction:

- Make clear what the intelligent system is going to do.
- Make clear how well the system performs.
- Display relevant contextual information.
- Mitigate social biases.
- Consider ignoring undesirable features.
- Support efficient correction.
- Clearly explain why the system made a certain decision.
- Remember recent interactions.
- Learn from user behavior.
- Update and adapt cautiously.
- Encourage feedback.
- Minimize unintentional bias.
- Ensure AI transparency.
- Create opportunities.
- Protect data privacy and security.
- Benefit customers and markets.
Towards Responsible AI - Global AI Student Conference 2022.pptx

  • 3. Trust: How confident are consumers with how organizations implement AI? 35% trust in AI; 65% don't. Responsibility: Should organizations be held accountable for AI misuse? 77% yes; 23% no. Sources: Accenture 2022 Tech Vision Research https://www.accenture.com/dk-en/insights/technology/technology-trends-2022 and Accenture 2019 Global Risk Study https://www.accenture.com/us-en/insights/financial-services/global-risk-study
  • 5. Responsible AI is an approach to evaluating, developing, and implementing AI systems in a safe, reliable, and ethical manner, and to making responsible decisions and taking responsible actions. Generally speaking, Responsible AI is the practice of upholding the principles of AI when designing, building, and using artificial intelligence systems.
  • 12. Differential privacy adds noise so that the maximum impact of an individual on the outcome of an aggregated analysis is at most epsilon (ϵ):
    - The incremental privacy risk between opting out and participating is governed by ϵ for every individual
    - Lower ϵ values result in greater privacy but lower accuracy
    - Higher ϵ values result in greater accuracy but a higher risk of individual identification
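The mechanism on this slide can be sketched in a few lines of plain Python. This is a minimal, hedged illustration of the Laplace mechanism for a private mean; the dataset, bounds, and function names are invented for the example, and real projects should use a vetted library such as SmartNoise rather than hand-rolled noise:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Sample Laplace(0, scale) via an inverse-CDF transform of a uniform draw
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon, seed=0):
    """Differentially private mean: clamp each value to [lower, upper],
    then add Laplace noise with scale = sensitivity / epsilon."""
    rng = random.Random(seed)
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    # One individual can move the mean by at most (upper - lower) / n
    sensitivity = (upper - lower) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

ages = [23, 45, 31, 62, 58, 40, 29, 51, 36, 47]  # true mean = 42.2
print(dp_mean(ages, lower=18, upper=90, epsilon=10.0))  # modest noise
print(dp_mean(ages, lower=18, upper=90, epsilon=0.1))   # much noisier
```

Note how the trade-off from the slide shows up directly: the noise scale is sensitivity/ϵ, so a small ϵ inflates the noise (more privacy, less accuracy) while a large ϵ leaves the aggregate close to the true value.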
  • 18. Absence of negative impact on groups based on ethnicity, gender, age, physical disability, or other sensitive features.
  • 23. Create models with parity constraints. Algorithms:
    - Exponentiated Gradient: a *reduction* technique that applies a cost-minimization approach to learning the optimal trade-off of overall predictive performance and fairness disparity (binary classification and regression)
    - Grid Search: a simplified version of the Exponentiated Gradient algorithm that works efficiently with small numbers of constraints (binary classification and regression)
    - Threshold Optimizer: a *post-processing* technique that applies a constraint to an existing classifier, transforming the prediction as appropriate (binary classification)
  • 24. Constraints:
    - Demographic parity: minimize disparity in the selection rate across sensitive feature groups
    - True positive rate parity: minimize disparity in true positive rate across sensitive feature groups
    - False positive rate parity: minimize disparity in false positive rate across sensitive feature groups
    - Equalized odds: minimize disparity in combined true positive rate and false positive rate across sensitive feature groups
    - Error rate parity: ensure that the error for each sensitive feature group does not deviate from the overall error rate by more than a specified amount
    - Bounded group loss: restrict the loss for each sensitive feature group in a regression model
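As a rough, self-contained illustration of the post-processing idea behind Threshold Optimizer (this is not the Fairlearn implementation itself; the scores, group labels, and function names are invented), one can pick a per-group score threshold so that each sensitive-feature group ends up with the same selection rate, i.e. demographic parity:

```python
def per_group_thresholds(scores, groups, target_rate):
    """For each sensitive-feature group, pick the score cutoff whose
    selection rate matches target_rate (a quantile of the group's scores)."""
    thresholds = {}
    for g in set(groups):
        g_scores = sorted(s for s, grp in zip(scores, groups) if grp == g)
        k = int(round(len(g_scores) * (1 - target_rate)))
        k = min(max(k, 0), len(g_scores) - 1)
        thresholds[g] = g_scores[k]
    return thresholds

def fair_predict(scores, groups, thresholds):
    # Apply each group's own threshold instead of one global cutoff
    return [1 if s >= thresholds[g] else 0 for s, g in zip(scores, groups)]

# Toy classifier scores for two sensitive-feature groups
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.65, 0.5, 0.45, 0.3, 0.2]
groups = ["A"] * 5 + ["B"] * 5
preds = fair_predict(scores, groups, per_group_thresholds(scores, groups, 0.4))
# Both groups now have a selection rate of 0.4 (2 of 5 selected each)
```

Compare this with a single global threshold of 0.6, which would select 4 of 5 in group A but only 1 of 5 in group B: the per-group thresholds trade some raw score fidelity for parity, which is exactly the performance/fairness trade-off the slide describes.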
  • 28. Packages that contribute to the explainability and transparency of a model: Interpret-Community, InterpretML, and Fairlearn.
  • 29. Global feature importance: the general importance of each feature across the whole test dataset, indicating the relative influence of each feature on the predicted label. Local feature importance: the importance of each feature for an individual prediction; in classification, this shows the relative support for each possible class per feature.
  • 32. Fairness questions across the ML lifecycle:
    - Problem statement: Is an algorithm an ethical solution to the problem?
    - Building datasets: Is the training data representative of different groups? Are there biases in the labels or features? Do I need to modify the data to mitigate biases?
    - Algorithm selection / training process: Is it necessary to include equity constraints in the objective function?
    - Evaluation / testing process: Has the model been evaluated using relevant equity metrics?
    - Deployment: Are there side effects among users? Is the model used in a population for which it has not been trained or evaluated?
    - Follow-up / feedback: Does the model encourage feedback loops that can produce increasingly unfair results?
  • 35. Clarify what the intelligent system is going to do. Clarify system performance. Display relevant contextual information. Mitigate social bias.
  • 36. Consider ignoring undesirable features. Consider an efficient correction. Clearly explain why the system made a certain decision.
  • 37. Remember recent interactions. Learn from user behavior. Update and adapt cautiously. Encourage feedback.
  • 38. Minimize unintentional bias Ensuring AI transparency Create opportunities Protect data privacy and security Benefit customers and markets

Editor's Notes

  1. When we talk about AI, we usually refer to a machine learning model that is used within a system to automate something. For example, a self-driving car can take images using sensors. A machine learning model can use these images to make predictions (for example, the object in the image is a tree). These predictions are used by the car to make decisions (for example, turn left to avoid the tree). We refer to this whole system as AI. When AI is developed, there are risks that it will be unfair or seen as a black box that makes decisions for humans.
  2. AI brings unprecedented opportunities to businesses, but it also comes with incredible responsibility.  Its direct impact on people's lives has raised considerable questions around AI ethics, data governance, trust and legality. In fact, Accenture's 2022 Tech Vision research found that only 35% of global consumers trust how organizations implement AI. And 77% think organizations should be held accountable for their misuse of AI. As organizations begin to expand their use of AI to capture business benefits, they need to consider regulations and the steps they need to take to make sure their organizations are compliant. That's where responsible AI comes into play where also data scientists and machine learning engineers have an ethical (and possibly legal) responsibility to create models that don't negatively affect individuals or groups of people.
  3. Responsible AI is the practice of designing, developing, and deploying AI with good intent to empower employees and businesses, and impact customers and society fairly, safely, and ethically, enabling organizations to build trust and scale AI more securely. They are the product of many decisions made by those who develop and implement them. From the purpose of the system to the way people interact with AI systems, responsible AI can help proactively guide decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency. Evaluating and researching ML models before their implementation remains at the core of reliable and responsible AI development.
  4. Microsoft has developed a Responsible AI Standard. It's a framework for building AI systems according to six key principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the foundations of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day. Let’s talk about some of the principles
  5. AI systems like facial recognition or voice tagging can definitely be used to breach an individual's privacy and threaten security. How an individual's online footprint is used to track, deduce and influence someone's preferences or perspectives is a serious concern that needs to be addressed. The way in which "fake news" or "deep fakes" influence public opinion also represents a threat to individual or social security. AI systems are increasingly misused in this domain. There is a pertinent need to establish a framework that protects an individual's privacy and security. Privacy is any data that can identify an individual and/or their location, activities and interests. Such data is generally subject to strict privacy and compliance laws, for example GDPR in Europe. AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data. It should require consumers to have adequate controls in choosing how their data is used.
  6. Data science projects, including machine learning projects, involve analysis of data; and often that data includes sensitive personal details that should be kept private.​ In practice, most reports that are published from the data include aggregations of the data, which you may think would provide some privacy – after all, the aggregated results do not reveal the individual data values.​ ​ However, consider a case where multiple analyses of the data result in reported aggregations that when combined, could be used to  work out information about individuals in the source dataset. In the example on the slide, 10 participants share data about their location and salary. The aggregated salary data tells us the average salary in Seattle; and the location data tells us that 10% of the study participants (in other words, a single person) is based in Seattle – so we can easily determine the specific salary of the Seattle-based participant.​ ​ Anyone reviewing both studies who happens to know a person from Seattle who participated, now knows that person's salary.​
  7. Differential privacy seeks to protect individual data values by adding statistical "noise" to the analysis process. The math involved in adding the noise is quite complex, but the principle is fairly intuitive – the noise ensures that data aggregations stay statistically consistent with the actual data values allowing for some random variation, but make it impossible to work out the individual values from the aggregated data. In addition, the noise is different for each analysis, so the results are non-deterministic – in other words, two analyses that perform the same aggregation may produce slightly different results.​
  8. SmartNoise is a project (co-developed by Microsoft) that contains components for building differentially private systems that are global.
  9. You can use SmartNoise to create an analysis in which noise is added to the source data. The underlying mathematics of how the noise is added can be quite complex, but SmartNoise takes care of most of the details for you Built-in support for training simple machine learning models like linear and logistic regression Compatible with open-source training libraries such TensorFlow Privacy
  10. Epsilon: The amount of variation caused by adding noise is configurable through a parameter called epsilon. This value governs the amount of additional risk that your personal data can be identified, and this privacy guarantee applies to every member in the data. A low epsilon value provides the most privacy, at the expense of less accuracy when aggregating the data. A higher epsilon value results in aggregations that are more true to the actual data distribution, but in which the contribution of a single individual to the aggregated value is less obscured by noise.​
  11. However, there are a few concepts it's useful to be aware of. Upper and lower bounds: Clamping is used to set upper and lower bounds on values for a variable. This is required to ensure that the noise generated by SmartNoise is consistent with the expected distribution of the original data. Sample size: To generate consistent differentially private data for some aggregations, SmartNoise needs to know the size of the data sample to be generated.
  12. Now let's compare that with a differentially private histogram of Age.
  13. It's common when analyzing data to examine the distribution of a variable using a histogram. For example, let's look at the true distribution of ages in the diabetes dataset. The histograms are similar enough to ensure that reports based on the differentially private data provide the same insights as reports from the raw data.
  14. Another common goal of analysis is to establish relationships between variables. SmartNoise provides a differentially private covariance function that can help with this. In this case, the covariance between Age and DiastolicBloodPressure is positive, indicating that older patients tend to have higher blood pressure.
  15. You can use the Fairlearn package to analyze a model and explore disparity in prediction performance for different subsets of data based on specific features, such as age. After training a model, you can use the Fairlearn package to compare its behavior for different sensitive feature values. A mix of Fairlearn and scikit-learn metric functions are used to calculate the performance values: use scikit-learn metric functions to calculate overall accuracy, recall, and precision metrics; use the Fairlearn selection_rate function to return the selection rate (percentage of positive predictions) for the overall population; and use a MetricFrame to calculate selection rate, accuracy, recall, and precision for each age group in the Age sensitive feature. From these metrics, you should be able to discern that a larger proportion of the older patients are predicted to be diabetic. Accuracy should be more or less equal for the two groups, but a closer inspection of precision and recall indicates some disparity in how well the model predicts for each age group: the model does a better job for patients in the older age group than for younger patients.
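The per-group comparison described here can be sketched by hand; Fairlearn's MetricFrame performs equivalent bookkeeping, and the toy labels, predictions, and group names below are invented purely for illustration:

```python
def group_metrics(y_true, y_pred, sensitive):
    """Selection rate, accuracy, and recall computed separately per group."""
    out = {}
    for g in set(sensitive):
        idx = [i for i, s in enumerate(sensitive) if s == g]
        t = [y_true[i] for i in idx]
        p = [y_pred[i] for i in idx]
        tp = sum(1 for ti, pi in zip(t, p) if ti == 1 and pi == 1)
        out[g] = {
            "selection_rate": sum(p) / len(p),
            "accuracy": sum(1 for ti, pi in zip(t, p) if ti == pi) / len(t),
            "recall": tp / sum(t) if sum(t) else float("nan"),
        }
    return out

y_true    = [1, 0, 1, 0, 1, 0, 0, 1]
y_pred    = [1, 0, 1, 0, 0, 0, 0, 0]
age_group = ["50+", "50+", "50+", "50+", "<50", "<50", "<50", "<50"]
metrics = group_metrics(y_true, y_pred, age_group)
# recall is 1.0 for "50+" but 0.0 for "<50": exactly the kind of
# disparity that overall accuracy alone would hide
```

This mirrors the point in the note: overall accuracy can look acceptable while recall differs sharply between sensitive-feature groups.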
  16. It's often easier to compare metrics visually. To do this, you'll use the Fairlearn fairness dashboard: When the widget is displayed, use the Get started link to start configuring your visualization. Select the sensitive features you want to compare (in this case, there's only one: Age). Select the model performance metric you want to compare (in this case, it's a binary classification model so the options are Accuracy, Balanced accuracy, Precision, and Recall). Start with Recall. Select the type of fairness comparison you want to view. Start with Demographic parity difference. The choice of parity constraint depends on the technique being used and the specific fairness criteria you want to apply. Constraints include:​ - Demographic parity: Use this constraint with any of the mitigation algorithms to minimize disparity in the selection rate across sensitive feature groups. For example, in a binary classification scenario, this constraint tries to ensure that an equal number of positive predictions are made in each group.​
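The "demographic parity difference" selected in the dashboard is simply the gap between the highest and lowest selection rates across sensitive-feature groups, where 0.0 means perfect demographic parity. A minimal sketch with invented predictions:

```python
def demographic_parity_difference(y_pred, sensitive):
    """Largest minus smallest selection rate across groups."""
    rates = []
    for g in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

y_pred    = [1, 1, 0, 1, 0, 0, 0, 1]
age_group = ["50+", "50+", "50+", "50+", "<50", "<50", "<50", "<50"]
print(demographic_parity_difference(y_pred, age_group))  # 0.75 - 0.25 = 0.5
```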
  17. View the dashboard charts, which show: Selection rate - A comparison of the number of positive cases per subpopulation. False positive and false negative rates - how the selected performance metric compares for the subpopulations, including underprediction (false negatives) and overprediction (false positives). Edit the configuration to compare the predictions based on different performance and fairness metrics. The results show a much higher selection rate for patients over 50 than for younger patients. However, in reality, age is a genuine factor in diabetes, so you would expect more positive cases among older patients. If we base model performance on accuracy (in other words, the percentage of predictions the model gets right), then it seems to work more or less equally for both subpopulations. However, based on the precision and recall metrics, the model tends to perform better for patients who are over 50 years old.
  18. A common approach to mitigation is to use one of the algorithms and constraints to train multiple models, and then compare their performance, selection rate, and disparity metrics to find the optimal model for your needs. Often, the choice of model involves a trade-off between raw predictive performance and fairness. Generally, fairness is measured by reduction in disparity of feature selection or by a reduction in disparity of performance metric. ​ To train the models for comparison, you use mitigation algorithms to create alternative models that apply parity constraints  to produce comparable metrics across sensitive feature groups. Some common algorithms used to optimize models for fairness. GridSearch trains multiple models in an attempt to minimize the disparity of predictive performance for the sensitive features in the dataset (in this case, the age groups) - Exponentiated Gradient - A *reduction* technique that applies a cost-minimization approach to learning the optimal trade-off of overall predictive performance and fairness disparity  (Binary classification and regression)​ - Grid Search - A simplified version of the Exponentiated Gradient algorithm that works efficiently with small numbers of constraints (Binary classification and regression)​ - Threshold Optimizer - A *post-processing* technique that applies a constraint to an existing classifier, transforming the prediction as appropriate (Binary classification)​
  19. ​The choice of parity constraint depends on the technique being used and the specific fairness criteria you want to apply. The EqualizedOdds parity constraint tries to ensure that models that exhibit similar true and false positive rates for each sensitive feature grouping.
  20. The models are shown on a scatter plot. You can compare the models by measuring the disparity in predictions (in other words, the selection rate) or the disparity in the selected performance metric (in this case, recall). In this scenario, we expect disparity in selection rates (because we know that age is a factor in diabetes, with more positive cases in the older age group). What we're interested in is the disparity in predictive performance, so select the option to measure Disparity in recall. The chart shows clusters of models with the overall recall metric on the X axis, and the disparity in recall on the Y axis. Therefore, the ideal model (with high recall and low disparity) would be at the bottom right corner of the plot. You can choose the right balance of predictive performance and fairness for your particular needs, and select an appropriate model to see its details. An important point to reinforce is that applying fairness mitigation to a model is a trade-off between overall predictive performance and disparity across sensitive feature groups - generally you must sacrifice some overall predictive performance to ensure that the model predicts fairly for all segments of the population.
  22. It is important to be able to understand how machine learning models make predictions; and be able to explain the justification for decisions made by the system by identifying and mitigating biases. Model interpretability has become a key element in helping model predictions to be explainable, not seen as a black box making random decisions. Transparency then makes it possible to explain why a model makes the predictions it does. What characteristics affect the behavior of a model? Why was a specific customer's loan application approved or denied?
  23. Model explainers use statistical techniques to calculate *feature importance*. This allows you to quantify the relative influence that each feature of the training dataset has on a prediction. Explainers work by evaluating a test dataset of feature cases and the labels that the model predicts for them. Global feature importance quantifies the relative importance of each feature in the test dataset as a whole, and how each feature influences the prediction. For example, a binary classification model to predict loan default risk could be trained from features such as loan amount, income, marital status, and age to predict a label of 1 for loans likely to be repaid and 0 for loans that have a significant risk of default (and, therefore, should not be approved). An explainer could then use a sufficiently representative test dataset to produce the following global feature importance values: - Income: 0.98 - Loan amount: 0.67 - Age: 0.54 - Marital status: 0.32. It is clear from these values that income is the most important feature in predicting whether or not a borrower will default on a loan, followed by the loan amount, then age, and finally marital status. Local feature importance measures the influence of each feature value for a specific individual prediction. For example, suppose Sam applies for a loan that the model approves. You can use an explainer on Sam's application to determine which factors influenced the prediction. You might get a result like the one shown in the second image, which indicates the amount of support for each class based on the value of each feature. Since this is a binary classification model, there are only two possible classes (0 and 1). In Sam's case, overall support for class 0 is -1.4, and support for class 1 is correspondingly 1.4, so the loan is approved.
The most important feature for a class 1 prediction is the loan amount, followed by income: this is the opposite of their order in the global feature importance values (which indicate that income is the most important factor for the data sample as a whole). There could be multiple reasons why local importance for an individual prediction varies from global importance for the overall dataset; for example, Sam might have a lower-than-average income, but the loan amount in this case might be unusually small.
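One simple, library-free way to obtain global feature-importance numbers like those above is permutation importance: perturb one feature's column and measure how much the model's output moves. The scorer, rows, and importance scale below are toy stand-ins (a deterministic column reversal replaces the usual random shuffle so the result is reproducible); they are not the SHAP values an explainer would actually produce:

```python
FEATURES = ["income", "loan_amount", "age"]

def model_score(row):
    # Toy stand-in for a trained model: income dominates, then loan amount
    income, loan_amount, age = row
    return 0.8 * income - 0.5 * loan_amount + 0.1 * age

def permutation_importance(rows, score_fn):
    """Mean absolute change in the score when one feature's column is
    permuted (here: deterministically reversed) across the dataset."""
    base = [score_fn(r) for r in rows]
    importance = {}
    for j, name in enumerate(FEATURES):
        col = [r[j] for r in rows][::-1]  # deterministic "shuffle"
        permuted = [list(r) for r in rows]
        for i, v in enumerate(col):
            permuted[i][j] = v
        new = [score_fn(r) for r in permuted]
        importance[name] = sum(abs(a - b) for a, b in zip(base, new)) / len(rows)
    return importance

rows = [(30, 10, 25), (60, 20, 40), (90, 30, 60), (120, 40, 35)]
imp = permutation_importance(rows, model_score)
# income > loan_amount > age, mirroring the ranking discussed above
```

Features whose permutation barely changes the output contribute little to predictions; features whose permutation moves the output a lot are globally important, which is the intuition behind the 0.98 / 0.67 / 0.54 / 0.32 example in the note.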
  24. The **Interpret-Community** package is a wrapper around a collection of *explainers* based on proven and emerging model interpretation algorithms, such as Shapley Additive Explanations (SHAP) (https://github.com/slundberg/shap) and Local Interpretable Model-agnostic Explanations (LIME) (https://github.com/marcotcr/lime).
  25. AI systems must be secure in order to be trusted. It is important that a system works as originally designed and responds safely to new situations. Their inherent resilience must resist intentional or unintentional manipulation. Rigorous testing and validation for operating conditions must be established to ensure that the system responds safely to extreme cases.   The performance of an AI system can degrade over time, so a robust model monitoring and tracking process must be established to reactively and proactively measure model performance and retrain it, as needed, to modernize it.  
  26. The world around us is diverse. There are people from all walks of life. People with disabilities, nonprofits, government agencies need AI systems as much as any other person or company. The AI system must be inclusive and in tune with the needs of this diverse ecosystem. When AI systems think about inclusion, the following questions must be answered: Was the AI system developed to ensure that it includes different categories of individuals or organizations? Are there any categories of data that need to be handled exceptionally to ensure they are included? Does the expertise provided by the AI system exclude any specific type of categories? If so, is there anything that can be done about it? Inclusive design practices can help developers understand and address potential barriers that might unintentionally exclude people. Whenever possible, speech-to-text, text-to-speech, and visual recognition technology should be used to train people with hearing, vision, and other disabilities.
  27. Accountability is an essential pillar of responsible AI. The people who design and implement the AI system need to be held accountable for their actions and decisions, especially as we move toward more autonomous systems. Organizations should consider establishing an internal review body that provides oversight, information, and guidance on the development and implementation of AI systems. While this guidance may vary by company and region, it should reflect an organization's AI journey. Imagine that the algorithm of an autonomous car causes an accident. Who is responsible for this? The driver, the car owner, or the creator of the AI? 
  28. If the AI system uses or generates metrics, it is important to show them all and how they are tracked. This helps users understand that AI will not be completely accurate and sets expectations about when the AI system might make mistakes. Provide visual information related to the user's current context and environment, such as nearby hotels and details relevant to the destination and travel dates. Make sure language and behavior don't introduce unwanted stereotypes or biases; for example, an autocomplete function must recognize multiple genders.
  29. Provide an easy mechanism to ignore or dismiss undesirable features or services. Provide an intuitive way to make it easy to edit, refine, or recover models. Optimize explainable AI to provide insights into the AI system's decisions.
  30. Keep a history of interactions for future reference. Personalize the interaction based on user behavior. Limit disruptive changes and update based on the user's profile. Collect user feedback from their interactions with the AI system.
  31. It ensures that models are as unbiased and representative as possible. Transparent and explainable AI builds trust among users. Create opportunities without stifling innovation. It ensures that personal and sensitive data is never used unethically. Creating an ethical basis for AI establishes systems that benefit shareholders, employees and society at large.
  32. By way of conclusion, recall the principles recommended for developing responsible AI. Reliability: we need to make sure that the systems we develop are consistent with the ideas, values, and design principles so that they don't cause harm in the world. Privacy: AI systems are complex and need more data, and our software must ensure that that data is protected and not leaked or disclosed. Inclusiveness: empower and engage people by making sure no one is left out; consider inclusion and diversity in your models so that the entire spectrum of communities is covered. Transparency: people creating AI systems must be open about how and why they are using AI, and open about the limitations of their systems. Transparency also means interpretability: people must be able to understand the behavior of AI systems. As a result, transparency helps gain more trust from users. Accountability: define best practices and processes that AI professionals can follow, such as a commitment to equity, to consider at every step of the AI lifecycle.