Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. With the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust in and adoption of AI systems in high-stakes domains requiring reliability and safety, such as healthcare and automated transportation, as well as in critical industrial applications with significant economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and (iii) designing measures for evaluating the performance of models on explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations/laws, and techniques/tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges/guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining/machine learning community.
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Explainable AI (XAI) is becoming a must-have non-functional requirement (NFR) for most AI-enabled product or solution deployments. Keen to hear viewpoints and explore collaboration opportunities.
An introductory presentation to Explainable AI, laying out its main motivations and importance. We briefly describe the main techniques available as of March 2020 and share many references so the reader can continue their studies.
An Introduction to XAI! Towards Trusting Your ML Models! (Mansour Saffar)
Machine learning (ML) is currently disrupting almost every industry and is used as a core component in many systems. The decisions made by these systems can have a great impact on society and on specific individuals, so the decision-making process has to be clear and explainable for humans to trust it. Explainable AI (XAI) is a relatively new field in which researchers develop methods to explain the decision-making process behind ML models. In this talk, we'll cover the fundamentals of XAI and discuss why we need to start integrating XAI with our ML models!
Presented at the Edmonton Data Science Meetup on October 2nd, 2019. Learn more: https://youtu.be/gEkPXOsDt_w
In this tutorial, we will first motivate the need for model interpretability and explainability in AI from societal, legal, customer/end-user, and model developer perspectives. [Note: Due to time constraints, we will not focus on techniques/tools for providing explainability as part of AI/ML systems.] Then, we will focus on the real-world application of explainability techniques in industry, wherein we present practical challenges/implications for using explainability techniques effectively and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we will identify open problems and research directions for the research community.
Spark 2019: Equifax's SVP of Data & Analytics, Peter Maynard, discusses the notion (and importance) of explainable AI in the financial services sector. He looks at the work Equifax has done to crack open the black box by creating patented AI technology that helps companies make smarter, explainable decisions using AI.
This tutorial extensively covers the definitions, nuances, challenges, and requirements for the design of interpretable and explainable machine learning models and systems in healthcare. We discuss many use cases in which interpretable machine learning models are needed in healthcare and how they should be deployed. Additionally, we explore the landscape of recent advances that address the challenges of model interpretability in healthcare, and describe how one would go about choosing the right interpretable machine learning algorithm for a given problem in healthcare.
Data Con LA 2020
Description
More and more organizations are embracing AI technology by infusing it into their products and services to differentiate themselves from their competitors. AI is being utilized in some sensitive areas of human life. In this session, let's look at some of the principles governing the adoption of AI in a responsible manner. Why are companies accelerating adoption of AI?
Increasingly, organizations are accelerating adoption of AI to differentiate their products and services in the market. We have seen the outcomes of this digital transformation in the areas of optimizing operations, engaging customers, empowering employees, and transforming products and services.
*List some of the sensitive use cases where AI is being applied
*Why is governing AI important, and what are those principles?
*How is Microsoft approaching it?
Speaker
Suresh Paulraj, Microsoft, Principal Cloud Solution Architect Data & AI
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021) by Krishnaram Kenthapadi
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
As AI becomes more and more prevalent in our lives, the decisions it makes for us have an ever greater impact on us and on those around us.
How can we help people trust the models we're building? The field of Explainable AI focuses on making any machine learning model interpretable by non-experts.
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare (Gregory Nelson)
Gregory S. Nelson, VP, Analytics and Strategy – Vidant Health | Adjunct Faculty Duke University
The promise of AI is quickly becoming a reality for a number of industries, including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in the early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
Explainable Artificial Intelligence (XAI)
Presented at the Lightning Talk session at ICACCI'18 on 20th September 2018
An Explainable AI (XAI), or Transparent AI, is an artificial intelligence (AI) whose actions can be easily understood by humans. It contrasts with the "black box" concept in machine learning, where even the designers of a complex algorithm cannot explain why the AI arrived at a specific decision; explainability concerns the "interpretability" of the workings of such algorithms.
https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck (SlideTeam)
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck is loaded with easy-to-follow content, and intuitive design. Introduce the types and levels of artificial intelligence using the highly-effective visuals featured in this PPT slide deck. Showcase the AI-subfield of machine learning, as well as deep learning through our comprehensive PowerPoint theme. Represent the differences, and interrelationship between AI, ML, and DL. Elaborate on the scope and use case of machine intelligence in healthcare, HR, banking, supply chain, or any other industry. Take advantage of the infographic-style layout to describe why AI is flourishing in today’s day and age. Elucidate AI trends such as robotic process automation, advanced cybersecurity, AI-powered chatbots, and more. Cover all the essentials of machine learning and deep learning with the help of this PPT slideshow. Outline the application, algorithms, use cases, significance, and selection criteria for machine learning. Highlight the deep learning process, types, limitations, and significance. Describe reinforcement training, neural network classifications, and a lot more. Hit download and begin personalization. Our AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck are topically designed to provide an attractive backdrop to any subject. Use them to look like a presentation pro. https://bit.ly/3ngJCKf
Slides for the Arithmer Seminar given by Dr. Daisuke Sato (Arithmer) at Arithmer Inc.
The topic is "explainable AI".
The "Arithmer Seminar" is held weekly; professionals from within and outside our company give lectures on their respective areas of expertise.
These slides were made by a lecturer from outside our company and are shared here with their permission.
Arithmer Inc. is a mathematics company that began at the University of Tokyo Graduate School of Mathematical Sciences. We apply modern mathematics to introduce new, advanced AI systems into solutions across many fields. Today, our research in modern mathematics and AI systems is capable of providing solutions to tough, complex issues. At Arithmer, we believe it is our job to consider how AI can be used skillfully to improve work efficiency and produce results that are useful to society.
Explainable AI makes algorithms transparent: their behavior can be interpreted, visualized, and explained, supporting fair, secure, and trustworthy AI applications.
Explainable AI - making ML and DL models more interpretable (Aditya Bhattacharya)
Abstract –
Industries have started to adopt AI and Machine Learning in almost every sector to solve complex business problems, but are these models always trustworthy? Machine Learning models are not oracles; rather, they are scientific methods and mathematical models that best describe the data. But science is all about explaining complex natural phenomena in the simplest way possible! So, can we make ML and DL models more interpretable, so that any business user can understand these models and trust their results?
To find out the answer, please join me in this session, in which I will talk about the concepts of Explainable AI and discuss the necessity and principles that help us demystify black-box AI models. I will discuss popular approaches like Feature Importance, Key Influencers, and Decomposition Trees used to make classical Machine Learning models interpretable. We will discuss various techniques used for Deep Learning model interpretation, like Saliency Maps, Grad-CAMs, and Visual Attention Maps, and finally go through more details about frameworks like LIME, SHAP, ELI5, SKATER, and TCAV, which help us make Machine Learning and Deep Learning models more interpretable, trustworthy, and useful!
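Feature Importance, the first of the classical approaches mentioned above, can be illustrated with a tiny model-agnostic sketch. Everything below is invented for illustration: the toy scorer, its features, coefficients, and threshold are hypothetical, and a real project would reach for a library such as SHAP or ELI5 rather than this hand-rolled permutation importance.

```python
import random

# Toy scorer standing in for a trained model. The features (income,
# debt, age), coefficients, and threshold are purely illustrative.
def predict(row):
    income, debt, age = row
    return 1 if (0.6 * income - 0.8 * debt + 0.1 * age) > 50 else 0

def permutation_importance(model, rows, labels, n_features, seed=0):
    """Model-agnostic importance: shuffle one feature column at a time
    and measure how much accuracy drops relative to the baseline."""
    rng = random.Random(seed)

    def accuracy(data):
        return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

    baseline = accuracy(rows)
    importances = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        rng.shuffle(col)
        # Rebuild each row with feature j replaced by a shuffled value.
        shuffled = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, col)]
        importances.append(baseline - accuracy(shuffled))
    return importances

rows = [(120, 30, 40), (40, 60, 25), (90, 10, 55), (30, 80, 30)]
labels = [predict(r) for r in rows]
print(permutation_importance(predict, rows, labels, n_features=3))
```

The appeal of this family of techniques is that `predict` is treated as a black box: the same procedure works for a gradient-boosted tree, a neural network, or any other classifier.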
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut... (Adriano Soares Koshiyama)
The workshop session focuses on the following topics:
Introduction to AI & Machine Learning (Algorithms)
Key Components of Algorithmic Impact Assessment
Algorithmic Explainability
Algorithmic Fairness
Algorithmic Robustness
This material summarizes the Counterfactual Explanation session held as part of "Planning Explainable AI!" during the 18th cohort of the 풀잎스쿨 study group.
It was compiled based on papers, YouTube videos, and the material below.
https://christophm.github.io/interpretable-ml-book/
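The core idea behind a counterfactual explanation, as covered in the session above, can be sketched in a few lines: search for the smallest change to an input that flips the model's decision. The toy credit model below, its coefficients, and the applicant values are purely hypothetical and are not taken from the referenced material.

```python
# Toy credit-scoring model; the coefficients and threshold exist only
# to make the counterfactual search concrete.
def predict(income, debt):
    return "approve" if 0.6 * income - 0.8 * debt > 30 else "deny"

def counterfactual(income, debt, feature, step, max_steps=1000):
    """Walk one feature in step-sized increments until the model's
    decision flips. The resulting input is a counterfactual
    explanation: 'had your income been X, you would have been approved'."""
    original = predict(income, debt)
    x = {"income": income, "debt": debt}
    for _ in range(max_steps):
        x[feature] += step
        if predict(x["income"], x["debt"]) != original:
            return x
    return None  # no flip found within the search budget

# An applicant who is denied today, and the smallest income
# raise (in whole units) that would flip the decision:
print(predict(60, 40))
print(counterfactual(60, 40, "income", 1))
```

Real counterfactual methods search over all features at once and optimize for plausibility and sparsity of the change; this one-feature walk only illustrates the underlying question being asked of the model.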
Explainable AI: Building trustworthy AI models? (Raheel Ahmad)
Building trustworthy, transparent and unbiased machine learning models?
Get started with explainX, which brings state-of-the-art explainability techniques under one roof, accessible via one line of code.
Learn the major modules within the explainX explainable AI and model interpretability framework.
These slides are taken from Raheel's presentation at UnpackAI's forum on Data Ethics in AI.
Interpreting deep learning and machine learning models is not just another regulatory burden to be overcome. The scientists, physicians, researchers, and analysts that use these technologies for their important work have the right to trust and understand their models and the answers they generate. This talk is an overview of several techniques for interpreting deep learning and machine learning models and telling stories from their results.
Speaker: Patrick Hall is a Data Scientist and Product Engineer at H2O.ai. He's also an Adjunct Professor at George Washington University in the Department of Decision Sciences. Prior to joining H2O, Patrick spent many years as a Senior Data Scientist at SAS and has worked with many Fortune 500 companies on their data science and machine learning problems. https://www.linkedin.com/in/jpatrickhall
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the video: https://youtu.be/TBJqgvXYhfo.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://twitter.com/h2oai.
- - -
Abstract:
Machine learning is at the forefront of many recent advances in science and technology, enabled in part by the sophisticated models and algorithms that have been recently introduced. However, as a consequence of this complexity, machine learning models essentially act as black boxes as far as users are concerned, making it incredibly difficult to understand, predict, or "trust" their behavior. In this talk, I will describe our research on approaches that explain the predictions of ANY classifier in an interpretable and faithful manner.
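The model-agnostic idea the abstract describes (popularized by LIME) can be sketched as: sample points near the instance being explained, weight them by proximity, and fit a simple local surrogate whose coefficients serve as the explanation. The sketch below is a heavily simplified, hypothetical illustration of that general idea, fitting one weighted slope per feature; it is not the authors' actual method, and the black-box function is invented.

```python
import math
import random

# An opaque nonlinear "model" standing in for any black-box classifier;
# the function and its coefficients are invented for this sketch.
def black_box(x1, x2):
    z = 2.0 * x1 - 3.0 * x2 + 0.5 * x1 * x2
    return 1.0 / (1.0 + math.exp(-z))

def local_slopes(f, x1, x2, n=500, sigma=0.1, seed=0):
    """LIME-style local surrogate, heavily simplified: sample points
    near (x1, x2), weight them by proximity, and fit one weighted
    least-squares slope per feature. The slopes describe how the model
    behaves in the neighborhood of this one instance."""
    rng = random.Random(seed)
    samples = [(x1 + rng.gauss(0, sigma), x2 + rng.gauss(0, sigma))
               for _ in range(n)]
    preds = [f(a, b) for a, b in samples]
    weights = [math.exp(-((a - x1) ** 2 + (b - x2) ** 2) / (2 * sigma ** 2))
               for a, b in samples]
    slopes = []
    for j in range(2):
        xs = [s[j] for s in samples]
        mx = sum(w * v for w, v in zip(weights, xs)) / sum(weights)
        my = sum(w * p for w, p in zip(weights, preds)) / sum(weights)
        cov = sum(w * (v - mx) * (p - my)
                  for w, v, p in zip(weights, xs, preds))
        var = sum(w * (v - mx) ** 2 for w, v in zip(weights, xs))
        slopes.append(cov / var)
    return slopes

# Near (0, 0) the model is locally roughly linear, so the fitted
# slopes should share the signs of the true coefficients (+, -).
print(local_slopes(black_box, 0.0, 0.0))
```

The actual LIME library fits a sparse linear model over all features jointly and handles text and image inputs via interpretable perturbations, but the local-surrogate intuition is the same.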
Sameer's Bio:
Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine. He is working on large-scale and interpretable machine learning applied to natural language processing. Sameer was a Postdoctoral Research Associate at the University of Washington and received his PhD from the University of Massachusetts, Amherst, during which he also worked at Microsoft Research, Google Research, and Yahoo! Labs on massive-scale machine learning. He was awarded the Adobe Research Data Science Faculty Award, was selected as a DARPA Riser, won the grand prize in the Yelp dataset challenge, and received the Yahoo! Key Scientific Challenges fellowship. Sameer has published extensively at top-tier machine learning and natural language processing conferences. (http://sameersingh.org)
GDG Cloud Southlake #17: Meg Dickey-Kurdziolek: Explainable AI is for Everyone (James Anderson)
If Artificial Intelligence (AI) is a black box, how can a human comprehend and trust the results of Machine Learning (ML) algorithms? Explainable AI (XAI) tries to shed light into that AI black box so humans can trust what is going on. Our speaker Meg Dickey-Kurdziolek is currently a UX Researcher for Google Cloud AI and Industry Solutions, where she focuses her research on Explainable AI and Model Understanding. Recording of the presentation: https://youtu.be/6N2DNN_HDWU
Explainability for Natural Language Processing (Yunyao Li)
NOTE: Please check out the final version here with small but important updates and links to downloadable version and recording: https://www.slideshare.net/YunyaoLi/explainability-for-natural-language-processing-249992241
Updated version of our popular tutorial "Explainability for Natural Language Processing", presented as a tutorial at KDD 2021.
Title: Explainability for Natural Language Processing
@article{kdd2021xaitutorial,
title={Explainability for Natural Language Processing},
author={Danilevsky, Marina and Dhanorkar, Shipi and Li, Yunyao and Popa, Lucian and Qian, Kun and Xu, Anbang},
journal={KDD},
year={2021}
}
Presenters: Marina Danilevsky, Shipi Dhanorkar, Yunyao Li, Lucian Popa, Kun Qian, and Anbang Xu
Website: http://xainlp.github.io/
Abstract:
This lecture-style tutorial, which mixes in an interactive literature-browsing component, is intended for the many researchers and practitioners working with text data and on applications of natural language processing (NLP) in data science and knowledge discovery. The focus of the tutorial is on the issues of transparency and interpretability as they relate to building models for text and their applications to knowledge discovery. As black-box models have gained popularity for a broad range of tasks in recent years, both the research and industry communities have begun developing new techniques to render them more transparent and interpretable. Reporting from an interdisciplinary team of social science, human-computer interaction (HCI), and NLP/knowledge management researchers, our tutorial has two components: an introduction to explainable AI (XAI) in the NLP domain and a review of the state-of-the-art research; and findings from a qualitative interview study of individuals working on real-world NLP projects as they are applied to various knowledge extraction and discovery tasks at a large, multinational technology and consulting corporation. The first component will introduce core concepts related to explainability in NLP. Then, we will discuss explainability for NLP tasks and report on a systematic literature review of the state-of-the-art literature in AI, NLP, and HCI conferences. The second component reports on our qualitative interview study, which identifies practical challenges and concerns that arise in real-world development projects that require the modeling and understanding of text data.
This tutorial extensively covers the definitions, nuances, challenges, and requirements for the design of interpretable and explainable machine learning models and systems in healthcare. We discuss many settings in which interpretable machine learning models are needed in healthcare and how they should be deployed. Additionally, we explore the landscape of recent advances that address the challenges of model interpretability in healthcare, and describe how one would go about choosing the right interpretable machine learning algorithm for a given problem in healthcare.
Data Con LA 2020
Description
More and more organizations are embracing AI technology by infusing it into their products and services to differentiate themselves from their competitors. AI is being utilized in some sensitive areas of human life. In this session, let's look at some of the principles governing the adoption of AI in a responsible manner. Why are companies accelerating adoption of AI?
Organizations are increasingly accelerating their adoption of AI to differentiate their products and services in the market. Outcomes of this digital transformation can be seen in the areas of optimizing operations, engaging customers, empowering employees, and transforming products and services.
*List some of the sensitive use cases where AI is being applied
*Why is governing AI important, and what are those principles?
*How is Microsoft approaching it?
Speaker
Suresh Paulraj, Microsoft, Principal Cloud Solution Architect Data & AI
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021) – Krishnaram Kenthapadi
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
As AI becomes more and more prevalent, the decisions it makes for us have an ever greater impact on our lives and those of others.
How can we help people trust the models we're building? The field of Explainable AI focuses on making any machine learning model interpretable by non-experts.
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare – Gregory Nelson
Gregory S. Nelson, VP, Analytics and Strategy – Vidant Health | Adjunct Faculty Duke University
The promise of AI is quickly becoming a reality for a number of industries, including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in the early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
Explainable Artificial Intelligence (XAI)
Presented at the Lightning Talk session at ICACCI'18 on 20th September 2018
Explainable AI (XAI), or Transparent AI, is artificial intelligence whose actions can be easily understood by humans. It contrasts with the "black box" concept in machine learning, where the workings of complex algorithms are so opaque that even their designers cannot explain why the AI arrived at a specific decision.
https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck – SlideTeam
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck is loaded with easy-to-follow content, and intuitive design. Introduce the types and levels of artificial intelligence using the highly-effective visuals featured in this PPT slide deck. Showcase the AI-subfield of machine learning, as well as deep learning through our comprehensive PowerPoint theme. Represent the differences, and interrelationship between AI, ML, and DL. Elaborate on the scope and use case of machine intelligence in healthcare, HR, banking, supply chain, or any other industry. Take advantage of the infographic-style layout to describe why AI is flourishing in today’s day and age. Elucidate AI trends such as robotic process automation, advanced cybersecurity, AI-powered chatbots, and more. Cover all the essentials of machine learning and deep learning with the help of this PPT slideshow. Outline the application, algorithms, use cases, significance, and selection criteria for machine learning. Highlight the deep learning process, types, limitations, and significance. Describe reinforcement training, neural network classifications, and a lot more. Hit download and begin personalization. Our AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck are topically designed to provide an attractive backdrop to any subject. Use them to look like a presentation pro. https://bit.ly/3ngJCKf
Slide for Arithmer Seminar given by Dr. Daisuke Sato (Arithmer) at Arithmer inc.
The topic is on "explainable AI".
"Arithmer Seminar" is weekly held, where professionals from within and outside our company give lectures on their respective expertise.
The slides are made by the lecturer from outside our company, and shared here with his/her permission.
Arithmer Inc. is a mathematics company that began at the University of Tokyo Graduate School of Mathematical Sciences. We apply modern mathematics to introduce advanced AI systems into solutions across a wide range of fields, and our research into modern mathematics and AI systems has the capability of providing solutions to tough, complex issues. At Arithmer, we believe it is our job to use AI skillfully to improve work efficiency and to produce results that are useful to people and society.
Explainable AI makes algorithms transparent, so that they can be interpreted, visualized, explained and integrated into fair, secure and trustworthy AI applications.
Explainable AI - making ML and DL models more interpretable – Aditya Bhattacharya
Abstract –
Although industries have started to adopt AI and Machine Learning in almost every sector to solve complex business problems, are these models always trustworthy? Machine Learning models are not oracles but rather scientific methods and mathematical models that best describe the data. But science is all about explaining complex natural phenomena in the simplest way possible! So, can we make ML and DL models more interpretable, so that any business user can understand these models and trust their results?
To find out the answer, please join me in this session, in which I will talk about the concepts of Explainable AI and discuss the necessity and principles that help us demystify black-box AI models. I will discuss popular approaches like Feature Importance, Key Influencers, and Decomposition Trees used to make classical Machine Learning interpretable. We will discuss various techniques used for Deep Learning model interpretation, like Saliency Maps, Grad-CAMs and Visual Attention Maps, and finally go through more details about frameworks like LIME, SHAP, ELI5, SKATER and TCAV, which help us make Machine Learning and Deep Learning models more interpretable, trustworthy and useful!
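As a minimal illustration of the Feature Importance idea mentioned in this abstract, the sketch below uses scikit-learn's permutation_importance on a toy model; the dataset and model choice are assumptions made for illustration, not taken from the talk itself.

```python
# Permutation feature importance: shuffle each feature in turn and
# measure the drop in held-out accuracy; large drops mark features
# the model actually relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

The same ranking idea underlies the "Key Influencers" style of explanation: it is model-agnostic, needing only predictions on perturbed data.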
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut... – Adriano Soares Koshiyama
The workshop session focuses on the following topics:
Introduction to AI & Machine Learning (Algorithms)
Key Components of Algorithmic Impact Assessment
Algorithmic Explainability
Algorithmic Fairness
Algorithmic Robustness
These slides summarize the Counterfactual Explanation session from the "설명가능한 인공지능 기획!" (Explainable AI Planning!) course in the 18th cohort of 풀잎스쿨.
They were compiled based on papers, YouTube videos, and the following material:
https://christophm.github.io/interpretable-ml-book/
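In the spirit of the counterfactual-explanation chapter of the book linked above, the sketch below finds a small change to one input feature that flips a classifier's prediction. The toy loan data and the greedy single-feature search are illustrative assumptions, not the method from the session slides.

```python
# Counterfactual explanation, naive version: nudge a mutable feature
# until the black-box prediction flips; the change needed is the
# counterfactual ("your loan would be approved if income were X higher").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy data: features = [income, debt]; approve when income - debt > 0.
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x = np.array([-0.5, 0.5])          # a clearly rejected applicant
assert clf.predict([x])[0] == 0

step, cf = 0.05, x.copy()
while clf.predict([cf])[0] == 0:   # greedily raise income until approval
    cf[0] += step

print("original:", x, "counterfactual:", cf)
print("income increase needed:", round(cf[0] - x[0], 2))
```

Real counterfactual methods add constraints (sparsity, plausibility, immutable features); this sketch only conveys the core idea of a minimal prediction-flipping change.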
Explainable AI: Building trustworthy AI models? – Raheel Ahmad
Building trustworthy, transparent and unbiased machine learning models?
Get started with explainX, which brings state-of-the-art explainability techniques under one roof, accessible via a single line of code.
Learn about the major modules within the explainX explainable AI and model interpretability framework.
These slides are taken from Raheel's presentation at the UnpackAI forum on Data Ethics in AI.
Interpreting deep learning and machine learning models is not just another regulatory burden to be overcome. Scientists, physicians, researchers, and analysts who use these technologies for their important work have the right to trust and understand their models and the answers they generate. This talk is an overview of several techniques for interpreting deep learning and machine learning models and telling stories from their results.
Speaker: Patrick Hall is a Data Scientist and Product Engineer at H2O.ai. He's also an Adjunct Professor at George Washington University in the Department of Decision Sciences. Prior to joining H2O, Patrick spent many years as a Senior Data Scientist at SAS and has worked with many Fortune 500 companies on their data science and machine learning problems. https://www.linkedin.com/in/jpatrickhall
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the video: https://youtu.be/TBJqgvXYhfo.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://twitter.com/h2oai.
- - -
Abstract:
Machine learning is at the forefront of many recent advances in science and technology, enabled in part by the sophisticated models and algorithms that have been recently introduced. However, as a consequence of this complexity, machine learning models essentially act as black boxes as far as users are concerned, making it incredibly difficult to understand, predict, or "trust" their behavior. In this talk, I will describe our research on approaches that explain the predictions of ANY classifier in an interpretable and faithful manner.
Sameer's Bio:
Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine. He is working on large-scale and interpretable machine learning applied to natural language processing. Sameer was a Postdoctoral Research Associate at the University of Washington and received his PhD from the University of Massachusetts, Amherst, during which he also worked at Microsoft Research, Google Research, and Yahoo! Labs on massive-scale machine learning. He was awarded the Adobe Research Data Science Faculty Award, was selected as a DARPA Riser, won the grand prize in the Yelp dataset challenge, and received the Yahoo! Key Scientific Challenges fellowship. Sameer has published extensively at top-tier machine learning and natural language processing conferences. (http://sameersingh.org)
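The model-agnostic idea in the abstract above can be sketched as a local surrogate model, the approach popularized by LIME (which Dr. Singh co-authored): sample perturbations around one input, query the black-box model, and fit a simple weighted linear model whose coefficients serve as a local explanation. The data, kernel width, and sampling scheme below are simplified assumptions, not the actual LIME implementation.

```python
# Local surrogate explanation of one prediction of a black-box model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = ((X[:, 0] + 0.5 * X[:, 1] ** 2) > 0).astype(int)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

x0 = X[0]                                      # instance to explain
Z = x0 + rng.normal(scale=0.5, size=(200, 3))  # local perturbations
pz = black_box.predict_proba(Z)[:, 1]          # black-box outputs

# Weight samples by proximity to x0 (Gaussian kernel), then fit a
# weighted linear surrogate; its coefficients are the local explanation.
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)
surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)
print("local feature weights:", np.round(surrogate.coef_, 3))
```

Because the surrogate only needs to be faithful near x0, a linear model can explain a highly nonlinear classifier locally even when no global linear approximation exists.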
GDG Cloud Southlake #17: Meg Dickey-Kurdziolek: Explainable AI is for Everyone – James Anderson
If Artificial Intelligence (AI) is a black box, how can a human comprehend and trust the results of Machine Learning (ML) algorithms? Explainable AI (XAI) tries to shed light into that AI black box so humans can trust what is going on. Our speaker Meg Dickey-Kurdziolek is currently a UX Researcher for Google Cloud AI and Industry Solutions, where she focuses her research on Explainable AI and Model Understanding. Recording of the presentation: https://youtu.be/6N2DNN_HDWU
Explainability for Natural Language Processing – Yunyao Li
NOTE: Please check out the final version here with small but important updates and links to downloadable version and recording: https://www.slideshare.net/YunyaoLi/explainability-for-natural-language-processing-249992241
Updated version on our popular tutorial on "Explainability for Natural Language Processing" as a tutorial at KDD'2021.
Trusted, Transparent and Fair AI using Open Source – Animesh Singh
Fairness, robustness, and explainability in AI are some of the key cornerstones of trustworthy AI. Through its open source projects, IBM and IBM Research bring together the developer, data science and research community to accelerate the pace of innovation and instrument trust into AI.
Explainability for Natural Language Processing – Yunyao Li
Final deck for our popular tutorial on "Explainability for Natural Language Processing" at KDD'2021. See links below for downloadable version (with higher resolution) and recording of the live tutorial.
Title: Explainability for Natural Language Processing
Presenter: Marina Danilevsky, Shipi Dhanorkar, Yunyao Li, Lucian Popa, Kun Qian and Anbang Xu
Website: http://xainlp.github.io/
Recording: https://www.youtube.com/watch?v=PvKOSYGclPk&t=2s
Downloadable version with higher resolution: https://drive.google.com/file/d/1_gt_cS9nP9rcZOn4dcmxc2CErxrHW9CU/view?usp=sharing
@article{kdd2021xaitutorial,
title={Explainability for Natural Language Processing},
author={Danilevsky, Marina and Dhanorkar, Shipi and Li, Yunyao and Popa, Lucian and Qian, Kun and Xu, Anbang},
journal={KDD},
year={2021}
}
Explainability for Natural Language Processing – Yunyao Li
Tutorial at AACL'2020 (http://www.aacl2020.org/program/tutorials/#t4-explainability-for-natural-language-processing).
More recent version: https://www.slideshare.net/YunyaoLi/explainability-for-natural-language-processing-249912819
Title: Explainability for Natural Language Processing
@article{aacl2020xaitutorial,
title={Explainability for Natural Language Processing},
author= {Dhanorkar, Shipi and Li, Yunyao and Popa, Lucian and Qian, Kun and Wolf, Christine T and Xu, Anbang},
journal={AACL-IJCNLP 2020},
year={2020}
}
Presenter: Shipi Dhanorkar, Christine Wolf, Kun Qian, Anbang Xu, Lucian Popa and Yunyao Li
Video: https://www.youtube.com/watch?v=3tnrGe_JA0s&feature=youtu.be
Abstract:
We propose a cutting-edge tutorial that investigates the issues of transparency and interpretability as they relate to NLP. Both the research community and industry have been developing new techniques to render black-box NLP models more transparent and interpretable. Reporting from an interdisciplinary team of social science, human-computer interaction (HCI), and NLP researchers, our tutorial has two components: an introduction to explainable AI (XAI) and a review of the state-of-the-art for explainability research in NLP; and findings from a qualitative interview study of individuals working on real-world NLP projects at a large, multinational technology and consulting corporation. The first component will introduce core concepts related to explainability in NLP. Then, we will discuss explainability for NLP tasks and report on a systematic literature review of the state-of-the-art literature in AI, NLP, and HCI conferences. The second component reports on our qualitative interview study which identifies practical challenges and concerns that arise in real-world development projects which include NLP.
Practical Explainable AI: How to build trustworthy, transparent and unbiased ... – Raheel Ahmad
This presentation is from the Federated & Distributed Machine Learning Conference. The talk focuses on why we need explainable AI and how we can build models that are trustworthy, transparent and unbiased.
Deep Learning for Recommendations: Fundamentals and Advances
In this part, we focus on five of the most crucial dimensions in achieving trustworthy Recommender Systems. They are Privacy, Safety & Robustness, Explainability, Non-discrimination & Fairness, Accountability & Auditability.
Tutorial Website/slides: https://advanced-recommender-systems.github.io/ijcai2021-tutorial/
https://youtu.be/ukZGTsRrziE
"I don't trust AI": the role of explainability in responsible AI – Erika Agostinelli
This Tech talk was part of the Women in Data Science 2021 Bristol Event. An Introduction to Explainable AI: what to consider when developing an explainable strategy, what are the state of the art techniques and open-source tools used in the field, with concrete examples. To see the recording: https://www.crowdcast.io/e/o4gjxatp
Responsible AI in Industry: Practical Challenges and Lessons Learned – Krishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
Ethical AI: Establish an AI/ML Governance framework addressing Reproducibility, Explainability, Bias & Accountability for Enterprise AI use-cases.
Presentation on “Open Source Enterprise AI/ML Governance” at Linux Foundation’s Open Compliance Summit, Dec 2020 (https://events.linuxfoundation.org/open-compliance-summit/)
Full article: https://towardsdatascience.com/ethical-ai-its-implications-for-enterprise-ai-use-cases-and-governance-81602078f5db
PwC's recently released Responsible AI Diagnostic surveyed around 250 senior business executives from May to June 2019. The survey found that 84% of CEOs agree that AI-based decisions need to be explainable in order to be trusted. In the past few years, deep learning has shown remarkable results in various applications, making it a first choice for many AI use cases. However, deep learning models are hard to explain, and since the majority of CEOs expect AI solutions to be explainable, deep learning faces a serious challenge. Daniel Kahneman, in his book Thinking, Fast and Slow, presented two different systems the human brain uses to form thoughts and decisions: System 1 is fast, intuitive and hard to explain; System 2 is slow, conscious and easy to explain. In this talk I will present: A) the PwC Responsible AI survey; B) a proposed deep learning framework that mimics the two systems of thinking; and C) recent advances in the field of neural-symbolic learning.
Slides from a talk by @Chris_Betz (https://data42.de) on AI, artificial intelligence and machine learning: what's driving the AI hype and what's behind it. Understand general concepts and dig deep into the explainability, debugging, verification and testing of machine learning solutions.
AI Foundations Course Module 1 - An AI Transformation Journey – Sri Ambati
The chances of successfully implementing AI strategies within an organization significantly improve when you can recognize where your organization is on the maturity scale. Over this course, you will learn the keys to unlocking value with AI which include asking the right questions about the problems you are solving and ensuring you have the right cross-section of talent, tools, and resources. By the end of this module, you should be able to recognize where your organization is on the AI transformation spectrum and identify some strategies that can get you to the next stage in your journey.
To find additional videos on AI courses, earn badges, join the courses at H2O.ai Learning Center: https://training.h2o.ai/products/ai-foundations-course
To find the Youtube video about this presentation: https://youtu.be/PJgr2epM6qs
Speakers:
Chemere Davis (H2O.ai - Senior Data Scientist Training Specialist)
Ingrid Burton (H2O.ai - CMO)
Transform Your Customer Experience with Modern Information Management – Nuxeo
According to new Nuxeo research, UK Financial Services firms are storing critical customer information and content - on average - across nine different systems. Additionally, 80% of respondents indicated that their information systems also aren’t integrated.
Not only does this have a significant impact on operational efficiency, it is also negatively impacting customers’ experiences as well. And with the specter of GDPR and other consumer privacy legislation looming, many organisations are finding it impossible to respond to these regulations and resulting customer inquiries.
In this webinar, we will share real-life examples of how leading financial services organisations - like ABN Amro, one of the Netherlands' largest and most progressive banks - have modernised their legacy information systems to use data and content in new and innovative ways to deliver more value to their customers, reduce technology costs, and transform their businesses.
What you’ll learn:
- Why current information management strategies make financial services less competitive
- How to bridge information silos to enhance your customer experience
- Methods to improve decision-making with intelligent information
- How other top banks and insurers have already modernised their information management systems and what benefits they are experiencing
- How AI can provide greater automation and insight into financial services processes
- And, new ways to better comply with regulations
Our panelist will discuss common information management challenges that legacy systems pose, and how to intelligently modernise existing systems to support digital transformation, take advantage of new cloud technologies, and gain a competitive advantage in today’s financial services market.
EDR-8202: Statistics II
Week 2 Assignment Worksheet: Conduct Hypothesis Testing
You will now have the opportunity to become re-acquainted with hypothesis testing concepts and with the use of SPSS, to review some of the basic statistical concepts learned, and to practice APA format when presenting statistical results.
The following small data set is from a study conducted within a single middle school. Fundamentally, this study is a comparison of the differences between male and female teachers in personal Confidence Scores and was conducted to determine if a relationship exists between the number of Years of Experience and Confidence Scores. The researcher wants to examine if years of experience could be used as a predictor of confidence scores and if gender could also impact confidence scores.
Sex      Years of Experience   Confidence Scores
Male     15                    110
Male     3                     117
Female   12                    118
Male     8                     120
Female   23                    104
Female   9                     100
Male     37                    107
Male     14                    115
Male     10                    114
Female   4                     115
Female   11                    115
Male     1                     100
Female   3                     117
Female   7                     115
Male     2                     103
Female   21                    125
Male     28                    115
Female   9                     115
Male     5                     110
Female   3                     110
Using scholarly writing and proper APA format, complete, and then submit the following:
I. Please answer the following questions using the readings assigned for this week.
1. What is hypothesis testing?
2. Discuss the following statement: “The sample must be representative of the population.” In your discussion, please include the concepts of generalizability and sampling procedures.
3. In hypothesis testing, there are some errors that could be committed if the wrong decision is made regarding the null hypothesis. These errors are Type I and Type II errors. Please describe each one of them, and provide an example when you are committing a Type I and Type II error.
4. Discuss each of the three main assumptions for parametric statistics. Be sure your discussion includes the variables that will be tested for the assumptions, the independent or dependent variable.
a. Normality
b. Homogeneity of variance (please provide an example of an analysis where this assumption is important)
c. Independence
II. Please open the file used in Assignment 1 (confidence scores and years of experience), or type the data provided in the table above into SPSS. Once the dataset is open or the data has been entered, proceed to answer questions 5 and 6.
5. Conduct the appropriate test for normality (Kolmogorov-Smirnov or Shapiro-Wilk), and provide the results and a brief discussion of the results below (test used, test results, p-value, etc.). Be sure to discuss whether the assumption of normality was met or violated.
6. Conduct the test of homogeneity of variance, and provide the results and a brief discussion of the results below (test used, test results, p-value, etc.). Be sure to discuss whether the assumption of homogeneity of variance was met or violated.
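Although the assignment specifies SPSS, the normality and homogeneity checks can also be run in Python with scipy as a cross-check; the sketch below applies the Shapiro-Wilk test to the Confidence Scores of each group from the table above and Levene's test across the two groups. (SPSS and scipy may report slightly different p-values due to implementation details.)

```python
# Shapiro-Wilk per group (normality) and Levene's test (homogeneity
# of variance) on the Confidence Scores, split by Sex.
from scipy import stats

male = [110, 117, 120, 107, 115, 114, 100, 103, 115, 110]
female = [118, 104, 100, 115, 115, 117, 115, 125, 115, 110]

for label, scores in (("male", male), ("female", female)):
    w, p = stats.shapiro(scores)
    print(f"Shapiro-Wilk ({label}): W={w:.3f}, p={p:.3f}")

stat, p = stats.levene(male, female)
print(f"Levene: W={stat:.3f}, p={p:.3f}")
# In each test, p > .05 would indicate the assumption is not violated.
```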
c. Independence
II.Please open the file used in assignment 1 (Confidence scores and years of experience), or type the data provided in the table above in SPSS. Once the dataset is open or the data is typed, proceed to answer questions 4 and 5.
5. Conduct the appropriate test (Kolmogorov-Smirnov or Shapiro-Wilk’s) for normality, and provide the results, and a brief discussion of the results below (test used, test results, p-value, etc.). Be sure to discuss if the assumption of normality was met or violated.
6. Conduct the test homogeneity of variance, and provide the results, and a brief discussion of the results below (test used, test results, p-value, etc.). Be sure to discuss if the assumption of normality was met or violated.
iii
Preface xxv
About the Authors xxxiv
PART I Introduction to Analytics and AI 1
Chapter 1 Overview of Business Intelligence, Analytics,
.
3. Agenda
● Part I: Introduction and Motivation
○ Motivation, Definitions, Properties, Evaluation
○ Challenges for Explainable AI @ Scale
● Part II: Explanation in AI (not only Machine Learning!)
○ From Machine Learning to Knowledge Representation and Reasoning and Beyond
● Part III: Explainable Machine Learning (from a Machine Learning Perspective)
● Part IV: Explainable Machine Learning (from a Knowledge Graph Perspective)
● Part V: Case Studies from Industry
○ Applications, Lessons Learned, and Research Challenges
13. … but not only Critical Systems (3)
▌Healthcare
• Applying ML methods in medical care is problematic.
• AI as 3rd-party actor in the physician-patient relationship: responsibility, confidentiality?
• Learning must be done with available data. Cannot randomize care given to patients!
• Must validate models before use.
Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, Noemie Elhadad: Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. KDD 2015: 1721-1730
Patricia Hannon, https://med.stanford.edu/news/all-news/2018/03/researchers-say-use-of-ai-in-medicine-raises-ethical-questions.html
15. Black-box AI creates confusion and doubt
• Business Owner: Can I trust our AI decisions?
• Internal Audit, Regulators: Are these AI system decisions fair?
• Customer Support: How do I answer this customer complaint?
• IT & Operations: How do I monitor and debug this model?
• Data Scientists: Is this the best model that can be built?
• End user receiving a poor decision: Why am I getting this decision? How can I get a better decision?
23. Why Explainability: Laws against Discrimination
• Immigration Reform and Control Act: Citizenship
• Rehabilitation Act of 1973; Americans with Disabilities Act of 1990: Disability status
• Civil Rights Act of 1964: Race
• Age Discrimination in Employment Act of 1967: Age
• Equal Pay Act of 1963; Civil Rights Act of 1964: Sex
And more...
25. GDPR Concerns Around Lack of Explainability in AI
“Companies should commit to ensuring systems that could fall under GDPR, including AI, will be compliant. The threat of sizeable fines of €20 million or 4% of global turnover provides a sharp incentive. Article 22 of GDPR empowers individuals with the right to demand an explanation of how an AI system made a decision that affects them.”
- VP, European Commission
28. Why Explainability: Growing Global AI Regulation
● GDPR: Article 22 empowers individuals with the right to demand an explanation of how an automated system made a decision that affects them.
● Algorithmic Accountability Act 2019: Requires companies to assess the risks the automated decision system poses to privacy or security, and the risks that contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers.
● California Consumer Privacy Act: Requires companies to rethink their approach to capturing, storing, and sharing personal data to align with the new requirements by January 1, 2020.
● Washington Bill 1655: Establishes guidelines for the use of automated decision systems to protect consumers, improve transparency, and create more market predictability.
● Massachusetts Bill H.2701: Establishes a commission on automated decision-making, transparency, fairness, and individual rights.
● Illinois House Bill 3415: States that predictive data analytics used in determining creditworthiness or hiring decisions may not include information that correlates with the applicant's race or zip code.
29. SR 11-7 and OCC regulations for Financial Institutions
30. “Explainability by Design” for AI products
Explainable AI spans the full model lifecycle: Train → QA → Predict → Deploy → A/B Test → Monitor → Debug, with a feedback loop. Capabilities include:
● Model Diagnostics, Root Cause Analytics, Performance Monitoring, Fairness Monitoring, Model Comparison, Cohort Analysis
● Explainable Decisions, API Support
● Model Launch Signoff, Model Release Management, Model Evaluation, Compliance Testing
● Model Debugging, Model Visualization
32. LinkedIn operates the largest professional network on the Internet
● 645M+ members
● 30M+ companies are represented on LinkedIn
● 90K+ schools listed (high school & college)
● 35K+ skills listed
● 20M+ open jobs on LinkedIn Jobs
● 280B Feed updates
35. What is Explainable AI?
With today's black-box AI: Data → Black-Box AI → AI product → Decision, Recommendation. The black box creates confusion:
● Why did you do that?
● Why did you not do that?
● When do you succeed or fail?
● How do I correct an error?
With Explainable AI: Data → Explainable AI → Explainable AI Product → Decision, Explanation, Feedback. Clear and transparent predictions:
● I understand why
● I understand why not
● I know why you succeed or fail
● I understand, so I trust you
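The decision-plus-explanation loop above can be sketched concretely. The wrapper below is a minimal, hypothetical illustration (not from the tutorial): it uses a linear model so that each feature's coefficient-times-value contribution serves as the explanation; the function name and the top-3 cutoff are my own choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Train a transparent model: for a linear model, each feature's
# contribution to the score is simply coefficient * feature value.
data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

def predict_with_explanation(x, top_k=3):
    """Return a decision plus the top contributing features."""
    pred = int(model.predict(x.reshape(1, -1))[0])
    contributions = model.coef_[0] * x   # per-feature score contributions
    order = np.argsort(-np.abs(contributions))[:top_k]
    explanation = [(data.feature_names[i], round(float(contributions[i]), 3))
                   for i in order]
    return pred, explanation

decision, why = predict_with_explanation(X[0])
print(decision, why)
```

In a real product the explanation would be surfaced alongside the decision and user feedback on it fed back into model development, which is the loop the slide depicts.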
36. Example of an End-to-End XAI System
- Humans may have follow-up questions
- Explanations cannot answer all users’ concerns
Weld, D., and Gagan Bansal. "The challenge of crafting intelligible intelligence." Communications of the ACM (2018).
37. How to Explain? Accuracy vs. Explainability
Learning techniques trade accuracy against explainability. At the high-accuracy, low-explainability end sit neural nets (CNN, RNN, GAN) and ensemble methods (Random Forest, XGB); statistical models (SVM, AOG, graphical models such as Bayesian belief nets, CRF, HBN, MLN, Markov models) sit in between; decision trees and linear models (SLR; linear, quasi-linear, polynomial, and non-linear functions) are the most interpretable.
• Challenges: supervised and unsupervised learning
• Approach: representation learning, stochastic selection
• Output: correlation, not causation
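The accuracy side of this trade-off is easy to measure. A small sketch of my own (dataset and hyperparameters chosen only for illustration) compares a depth-limited decision tree, which a human can read end to end, against a random forest on the same data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# One dataset, two ends of the spectrum: a depth-3 tree a human can
# inspect directly, and a 200-tree forest nobody can read.
X, y = load_breast_cancer(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

acc_tree = cross_val_score(tree, X, y, cv=5).mean()
acc_forest = cross_val_score(forest, X, y, cv=5).mean()
print(f"interpretable tree : {acc_tree:.3f}")
print(f"black-box forest   : {acc_forest:.3f}")
```

The residual accuracy gap between the two is precisely what motivates post-hoc explanation techniques for black-box models.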
40. Evaluation (1) - Perturbation-based Approaches
KDD 2019 Tutorial on Explainable AI in Industry - https://sites.google.com/view/kdd19-explainable-ai-tutorial
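One common perturbation-based check is sketched below, under the assumption that removing the features an explanation ranks highest should hurt the model's confidence more than removing random features. A simple linear-model attribution stands in for whatever explainer is being evaluated; the dataset and the choice of k are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

pred = model.predict(X)
rows = np.arange(len(X))
base_conf = model.predict_proba(X)[rows, pred]  # confidence in the predicted class

# Signed attribution toward the predicted class: positive = supports the prediction.
support = model.coef_[0] * X * np.where(pred == 1, 1.0, -1.0)[:, None]

def mean_confidence_drop(idx_per_row):
    """Zero out the given features per row (the mean, after standardization)
    and measure how much confidence in the original prediction falls."""
    Xp = X.copy()
    for r, idx in zip(rows, idx_per_row):
        Xp[r, idx] = 0.0
    return float(np.mean(base_conf - model.predict_proba(Xp)[rows, pred]))

k = 5
top_k = np.argsort(-support, axis=1)[:, :k]
rand_k = np.array([rng.choice(X.shape[1], k, replace=False) for _ in rows])

drop_top, drop_rand = mean_confidence_drop(top_k), mean_confidence_drop(rand_k)
print(f"confidence drop, top-{k} attributed features: {drop_top:.3f}")
print(f"confidence drop, {k} random features:         {drop_rand:.3f}")
```

A faithful explanation should produce a markedly larger drop than the random baseline; if it does not, the attributions are not tracking what the model actually uses.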
41. Evaluation (2) - Human (Role)-based Evaluation is Essential… but too often based on size!
Evaluation criteria for explanations [Miller, 2017]:
○ Truth & probability
○ Usefulness, relevance
○ Coherence with prior belief
○ Generalization
Cognitive chunks = basic explanation units (for different explanation needs):
○ Which basic units for explanations?
○ How many?
○ How to compose them?
○ Uncertainty & end users?
[Doshi-Velez and Kim 2017; Poursabzi-Sangdeh et al. 2018]
42. Evaluation (3) - XAI: One Objective, Many Metrics
● Comprehensibility: How much effort is needed for correct human interpretation?
● Succinctness: How concise and compact is the explanation?
● Actionability: What can one do with the explanation?
● Reusability: Could the explanation be personalized?
● Accuracy: How accurate and precise is the explanation?
● Completeness: Is the explanation complete, partial, restricted?
Source: Accenture Point of View. Understanding Machines: Explainable AI. Freddy Lecue, Dadong Wan
45. XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
How to summarize the reasons (motivation, justification, understanding) for an AI system's behavior, and explain the causes of its decisions?
Artificial Intelligence spans many subfields, each needing its own notion of explanation: Machine Learning, Computer Vision, Search, Planning, Knowledge Representation and Reasoning (KRR), NLP, Game Theory, Multi-Agent Systems (MAS), Robotics, and Uncertainty in AI (UAI).
55. XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
How to summarize the reasons (motivation, justification, understanding) for an AI system's behavior, and explain the causes of its decisions? Each AI subfield frames the question differently and brings its own techniques:
● Machine Learning: Which features are responsible for the classification? (Surrogate model, dependency plot, feature importance)
● Computer Vision: Which complex features are responsible for the classification? (Saliency map)
● Game Theory: Which combination of features is optimal? (Shapley values)
● Search / Constraint Solving: Which constraints can be relaxed? (Conflicts resolution)
● Planning: Which actions are responsible for a plan? (Plan refinement)
● KRR: Which axiom is responsible for the inference (e.g., classification)? Finding the right root causes (Abduction, diagnosis)
● MAS: Which agent strategy & plan? Which player contributes most? Why such a conversational flow? Which entity is responsible for the classification? (Strategy summarization; machine-learning-based explanation)
● Robotics: Which decisions, or combinations of multimodal decisions, lead to an action? (Narrative-based explanation)
● UAI: Uncertainty map; uncertainty as an alternative to explanation
56. Feature Importance
Partial Dependence Plot
Individual Conditional Expectation
Sensitivity Analysis
Naive Bayes model
Igor Kononenko. Machine learning for medical diagnosis:
history, state of the art and perspective. Artificial Intelligence
in Medicine, 23:89–109, 2001.
Counterfactual
What-if
Brent D. Mittelstadt, Chris
Russell, Sandra Wachter:
Explaining Explanations in AI.
FAT 2019: 279-288
Rory Mc Grath, Luca Costabello,
Chan Le Van, Paul Sweeney,
Farbod Kamiab, Zhao Shen,
Freddy Lécué: Interpretable Credit
Application Predictions With
Counterfactual Explanations.
CoRR abs/1811.05245 (2018)
Interpretable Models:
• Decision Trees, Lists and Sets
• GAMs
• GLMs
• Linear regression
• Logistic regression
• KNNs
Overview of Explanation in Machine Learning (1)
57. Auto-encoder / Prototype
Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin: Deep
Learning for Case-Based Reasoning Through Prototypes: A
Neural Network That Explains Its Predictions. AAAI 2018:
3530-3537
Surrogate Model
Mark Craven, Jude W. Shavlik: Extracting Tree-Structured
Representations of Trained Networks. NIPS 1995: 24-30
Attribution for Deep
Network (Integrated gradient-based)
Mukund Sundararajan, Ankur Taly, and Qiqi Yan.
Axiomatic attribution for deep networks. In ICML,
pp. 3319–3328, 2017.
Attention Mechanism
Avanti Shrikumar, Peyton Greenside, Anshul
Kundaje: Learning Important Features Through
Propagating Activation Differences. ICML 2017:
3145-3153
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine
translation by jointly learning to align and translate.
International Conference on Learning Representations,
2015
Edward Choi, Mohammad Taha Bahadori, Jimeng Sun,
Joshua Kulas, Andy Schuetz, Walter F. Stewart: RETAIN: An
Interpretable Predictive Model for Healthcare using
Reverse Time Attention Mechanism. NIPS 2016: 3504-
3512
Chaofan Chen, Oscar Li, Alina Barnett, Jonathan Su, Cynthia
Rudin: This looks like that: deep learning for interpretable
image recognition. CoRR abs/1806.10574 (2018)
Overview of Explanation in Machine Learning (2)
●Artificial Neural Network
58. Uncertainty Map
Saliency Map
Alex Kendall, Yarin Gal: What Uncertainties Do We Need in Bayesian Deep Learning for
Computer Vision? NIPS 2017: 5580-5590
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, Been Kim:
Sanity Checks for Saliency Maps. NeurIPS 2018: 9525-9536
Visual Explanation
Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele,
Trevor Darrell: Generating Visual Explanations. ECCV (4) 2016: 3-19
David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba:
Network Dissection: Quantifying Interpretability of Deep Visual
Representations. CVPR 2017: 3319-3327
Interpretable Units
Overview of Explanation in Machine Learning (3)
●Computer Vision
59. Shapley Additive Explanation
Scott M. Lundberg, Su-In Lee: A Unified Approach to Interpreting Model Predictions. NIPS 2017: 4768-
4777
Overview of Explanation in Different AI Fields (1)
●Game Theory
60. L-Shapley and C-Shapley (with graph structure)
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan: L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data. ICLR 2019
61. Instance-wise feature importance (causal influence)
Erik Štrumbelj and Igor Kononenko. An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research, 11:1-18, 2010.
Anupam Datta, Shayak Sen, and Yair Zick. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Security and Privacy (SP), 2016 IEEE Symposium on, pp. 598-617. IEEE, 2016.
62. Conflicts resolution
Barry O'Sullivan, Alexandre Papadopoulos, Boi Faltings, Pearl Pu: Representative Explanations for
Over-Constrained Problems. AAAI 2007: 323-328
Robustness Computation
Hebrard, E., Hnich, B., & Walsh, T. (2004, July). Robust solutions for constraint satisfaction and
optimization. In ECAI (Vol. 16, p. 186).
[Figure: if A+1 then new conflicts on X and Y]
• Search and Constraint Satisfaction
Overview of Explanation in Different AI Fields (2)
63. Constraints relaxation
Ulrich Junker: QUICKXPLAIN: Preferred Explanations and Relaxations for Over-Constrained Problems. AAAI 2004: 167-172
64. Explaining Reasoning (through Justification) e.g., Subsumption
Deborah L. McGuinness, Alexander Borgida: Explaining Subsumption in Description Logics. IJCAI (1)
1995: 816-821
• Knowledge Representation and Reasoning
Overview of Explanation in Different AI Fields (3)
65. Abduction Reasoning (in Bayesian Networks)
David Poole: Probabilistic Horn Abduction and Bayesian Networks. Artif. Intell. 64(1): 81-129 (1993)
66. Diagnosis Inference
Alban Grastien, Patrik Haslum, Sylvie Thiébaux: Conflict-Based Diagnosis of Discrete Event Systems: Theory and Practice. KR 2012
67. Explanation of Agent Conflicts & Harmful Interactions
Katia P. Sycara, Massimo Paolucci, Martin Van Velsen, Joseph A. Giampapa: The RETSINA MAS Infrastructure. Autonomous Agents and Multi-Agent Systems 7(1-2): 29-48 (2003)
• Multi-agent Systems
Overview of Explanation in Different AI Fields (4)
68. Agent Strategy Summarization
Ofra Amir, Finale Doshi-Velez, David Sarne: Agent Strategy Summarization. AAMAS 2018: 1203-1207
69. Explainable Agents
Joost Broekens, Maaike Harbers, Koen V. Hindriks, Karel van den Bosch, Catholijn M. Jonker, John-Jules Ch. Meyer: Do You Get It? User-Evaluated Explainable BDI Agents. MATES 2010: 28-39
W. Lewis Johnson: Agents that Learn to Explain Themselves. AAAI 1994: 1257-1263
70. Explainable NLP
Hui Liu, Qingyu Yin, William Yang Wang: Towards Explainable NLP: A Generative Explanation Framework for Text Classification. CoRR abs/1811.00196 (2018)
Fine-grained explanations are in the form of:
• texts in a real-world dataset;
• numerical scores
• NLP
Overview of Explanation in Different AI Fields (5)
71. LIME for NLP
Marco Túlio Ribeiro, Sameer Singh, Carlos Guestrin: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016: 1135-1144
72. NLP Debugger
Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush: Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models. IEEE Trans. Vis. Comput. Graph. 25(1): 353-363 (2019)
Hendrik Strobelt, Sebastian Gehrmann, Hanspeter Pfister, Alexander M. Rush: LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks. IEEE Trans. Vis. Comput. Graph. 24(1): 667-676 (2018)
73. • Planning and Scheduling
XAI Plan
Rita Borgo, Michael Cashmore, Daniele Magazzeni: Towards Providing Explanations for AI Planner Decisions. CoRR abs/1810.06338 (2018)
Overview of Explanation in Different AI Fields (6)
74. Human-in-the-loop Planning: (Manual) Plan Comparison
Maria Fox, Derek Long, Daniele Magazzeni: Explainable Planning. CoRR abs/1709.10256 (2017)
75. Narration of Autonomous Robot Experience
Stephanie Rosenthal, Sai P Selvaraj, and Manuela Veloso. Verbalization: Narration of autonomous
robot experience. In IJCAI, pages 862–868. AAAI Press, 2016.
Daniel J Brooks et al. 2010. Towards State Summarization for Autonomous Robots.. In AAAI Fall
Symposium: Dialog with Robots, Vol. 61. 62.
• Robotics
Overview of Explanation in Different AI Fields (7)
76. From Decision Tree to human-friendly information
Raymond Ka-Man Sheh: "Why Did You Do That?" Explainable Intelligent Robots. AAAI Workshops 2017
77. Probabilistic Graphical Models
Daphne Koller, Nir Friedman: Probabilistic Graphical Models - Principles and Techniques. MIT
Press 2009, ISBN 978-0-262-01319-2, pp. I-XXXV, 1-1231
• Reasoning under Uncertainty
Overview of Explanation in Different AI Fields (8)
79. Achieving Explainable AI
Approach 1: Post-hoc explain a given AI model
● Individual prediction explanations in terms of input features, influential examples, concepts, local decision rules
● Global prediction explanations of the entire model in terms of partial dependence plots, global feature importance, global decision rules
Approach 2: Build an interpretable model
● Logistic regression, Decision trees, Decision lists and sets, Generalized Additive Models (GAMs)
84. Credit Line Increase
Fair lending laws [ECOA, FCRA] require credit decisions to be explainable
Bank Credit Lending Model
Why? Why not?
How?
? Request Denied
Query AI System
Credit Lending Score = 0.3
Credit Lending in a black-box ML world
85. Attribute a model’s prediction on an input to features of the input
Examples:
● Attribute an object recognition network’s prediction to its pixels
● Attribute a text sentiment network’s prediction to individual words
● Attribute a lending model’s prediction to its features
A reductive formulation of “why this prediction” but surprisingly useful
The Attribution Problem
86. Applications of Attributions
● Debugging model predictions
E.g., attributing an image misclassification to the pixels responsible for it
● Generating an explanation for the end-user
E.g., exposing attributions for a lending prediction to the end-user
● Analyzing model robustness
E.g., crafting adversarial examples using weaknesses surfaced by attributions
● Extracting rules from the model
E.g., combining attributions to craft rules (pharmacophores) capturing the prediction logic of a drug screening network
87. Next few slides
We will cover the following attribution methods**
● Ablations
● Gradient based methods (specific to differentiable models)
● Score Backpropagation based methods (specific to NNs)
We will also discuss game theory (Shapley value) in attributions
**Not a complete list!
See Ancona et al. [ICML 2019], Guidotti et al. [arxiv 2018] for a comprehensive survey
88. Ablations
Drop each feature and attribute the change in prediction to that feature
Pros:
● Simple and intuitive to interpret
Cons:
● Unrealistic inputs
● Improper accounting of interactive features
● Can be computationally expensive
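The ablation recipe above can be sketched in a few lines. This is an illustrative sketch: the toy linear model, the input, and the choice of 0.0 as the "dropped" value are assumptions, not part of the tutorial.

```python
# Sketch of ablation-based attribution: drop each feature and attribute the
# change in prediction to that feature (model and ablated value are assumptions).

def ablation_attributions(model, x, ablated_value=0.0):
    """Attribute model(x) to each feature by ablating it one at a time."""
    base_score = model(x)
    attributions = []
    for i in range(len(x)):
        x_ablated = list(x)
        x_ablated[i] = ablated_value  # "drop" feature i
        attributions.append(base_score - model(x_ablated))
    return attributions

# Toy linear model: score = 2*x0 + 3*x1
model = lambda x: 2 * x[0] + 3 * x[1]
print(ablation_attributions(model, [1.0, 1.0]))  # → [2.0, 3.0]
```

Note how the cons show up even here: setting a feature to 0.0 may produce an unrealistic input, and interacting features would not be accounted for properly.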
89. Feature*Gradient
Attribution to a feature is feature value times gradient, i.e., xi * ∂y/∂xi
● Gradient captures sensitivity of output w.r.t. feature
● Equivalent to Feature*Coefficient for linear models
○ First-order Taylor approximation of non-linear models
● Popularized by SaliencyMaps [NIPS 2013], Baehrens et al. [JMLR 2010]
Gradients in the
vicinity of the input
seem like noise?
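A minimal numerical sketch of Feature*Gradient; the toy model, input, and the central-difference gradient are stand-ins (assumptions) for a real network and autodiff.

```python
# Sketch of Feature*Gradient: attribution_i = x_i * dy/dx_i, with the gradient
# taken numerically here (a real network would use autodiff).

def gradient(model, x, eps=1e-5):
    grads = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += eps
        lo[i] -= eps
        grads.append((model(hi) - model(lo)) / (2 * eps))  # central difference
    return grads

def grad_times_input(model, x):
    return [xi * g for xi, g in zip(x, gradient(model, x))]

# For a linear model this reduces to Feature*Coefficient.
model = lambda x: 4 * x[0] - 2 * x[1]
print(grad_times_input(model, [0.5, 1.0]))  # ≈ [2.0, -2.0]
```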
90. Local linear approximations can be too local
[Figure: model score for "fireboat-ness" of an image along a path from 0.0 to 1.0 — interesting gradients in the rising region, uninteresting gradients where the score saturates]
91. Score Back-Propagation based Methods
Re-distribute the prediction score through the neurons in the network
● LRP [JMLR 2017], DeepLift [ICML 2017], Guided BackProp [ICLR 2014]
Easy case: Output of a neuron is a linear function
of previous neurons (i.e., ni = ⅀ wij * nj)
e.g., the logit neuron
● Re-distribute the contribution in proportion to
the coefficients wij
Image credit heatmapping.org
92. Score Back-Propagation based Methods
Tricky case: Output of a neuron is a non-linear
function, e.g., ReLU, Sigmoid, etc.
● Guided BackProp: Only consider ReLUs that
are on (linear regime), and which contribute
positively
● LRP: Use first-order Taylor decomposition to
linearize activation function
● DeepLift: Distribute activation difference
relative to a reference point in proportion to
edge weights
93. Score Back-Propagation based Methods
Pros:
● Conceptually simple
● Methods have been empirically validated to
yield sensible result
Cons:
● Hard to implement, requires instrumenting
the model
● Often breaks implementation invariance
Think: F(x, y, z) = x * y *z and
G(x, y, z) = x * (y * z)
94. Baselines and additivity
● When we decompose the score via backpropagation, we imply a normative
alternative called a baseline
○ “Why Pr(fireboat) = 0.91 [instead of 0.00]”
● Common choice is an informationless input for the model
○ E.g., Black image for image models
○ E.g., Empty text or zero embedding vector for text models
● Additive attributions explain F(input) - F(baseline) in terms of input features
100. Detecting an architecture bug
● Deep network [Kearns, 2016] predicts if a molecule binds to certain DNA site
● Finding: Some atoms had identical attributions despite different connectivity
101. Detecting an architecture bug
● Bug: The architecture had a bug due to which the convolved bond features
did not affect the prediction!
102. Detecting a data issue
● Deep network predicts various diseases from chest x-rays
[Figure: original image alongside integrated gradients (for top label)]
103. ● Finding: Attributions fell on radiologist's markings (rather than the pathology)
105. Classic result in game theory on distributing gain in a coalition game
● Coalition Games
○ Players collaborating to generate some gain (think: revenue)
○ Set function v(S) determining the gain for any subset S of players
Shapley Value [Annals of Mathematical studies,1953]
106. Shapley Value [Annals of Mathematical Studies, 1953]
● Shapley Values are a fair way to attribute the total gain to the players based on
their contributions
○ Concept: Marginal contribution of a player to a subset of other players (v(S U {i}) - v(S))
○ Shapley value for a player is a specific weighted aggregation of its marginal over all
possible subsets of other players
Shapley Value for player i = ⅀S ⊆ N\{i} w(S) * (v(S U {i}) - v(S))
(where w(S) = |S|! (N - |S| - 1)! / N!)
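For small N, the definition above can be computed exactly by enumerating all subsets. A sketch (the two-player gain function v is a toy assumption):

```python
from itertools import combinations
from math import factorial

# Exact Shapley values: phi_i = sum over S subset of N\{i} of
# |S|! * (n - |S| - 1)! / n! * (v(S U {i}) - v(S)).

def shapley_values(v, n):
    players = list(range(n))
    phi = [0.0] * n
    for i in players:
        others = [p for p in players if p != i]
        for size in range(n):  # all subset sizes of the other players
            for S in combinations(others, size):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy game: the gain of 10 is realized only when both players join.
v = lambda S: 10.0 if S == {0, 1} else 0.0
print(shapley_values(v, 2))  # → [5.0, 5.0]: symmetric players split the gain
```

The symmetric split also illustrates the Symmetry axiom discussed on the next slide.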
107. Shapley values are unique under four simple axioms
● Dummy: If a player never contributes to the game then it must receive zero attribution
● Efficiency: Attributions must add to the total gain
● Symmetry: Symmetric players must receive equal attribution
● Linearity: Attribution for the (weighted) sum of two games must be the same as the
(weighted) sum of the attributions for each of the games
Shapley Value Justification
108. SHAP [NIPS 2017], QII [S&P 2016], Štrumbelj & Kononenko [JMLR 2010]
● Define a coalition game for each model input X
○ Players are the features in the input
○ Gain is the model prediction (output), i.e., gain = F(X)
● Feature attributions are the Shapley values of this game
Shapley Values for Explaining ML models
109. Challenge: Shapley values require the gain to be defined for all subsets of players
● What is the prediction when some players (features) are absent?
i.e., what is F(x_1, <absent>, x_3, …, <absent>)?
Shapley Values for Explaining ML models
110. Key Idea: Take the expected prediction when the (absent) feature is sampled
from a certain distribution.
Different approaches choose different distributions
● [SHAP, NIPS 2017] Use conditional distribution w.r.t. the present features
● [QII, S&P 2016] Use marginal distribution
● [Štrumbelj et al., JMLR 2010] Use uniform distribution
Modeling Feature Absence
Preprint: The Explanation Game: Explaining Machine Learning Models with Cooperative Game
Theory
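The expected-prediction idea can be sketched as follows, here using the marginal-distribution approach: absent features are filled in from rows of a background dataset and the predictions averaged. The toy model and background rows are assumptions.

```python
# Sketch of modeling feature absence: replace "absent" features with values
# drawn from background data, then average predictions (marginal approach).

def expected_prediction(model, x, present, background):
    """E[F(x)] with features outside `present` sampled from `background` rows."""
    total = 0.0
    for row in background:
        z = [x[i] if i in present else row[i] for i in range(len(x))]
        total += model(z)
    return total / len(background)

model = lambda x: x[0] + 10 * x[1]
background = [[0.0, 0.0], [2.0, 1.0]]  # hypothetical background samples
# Feature 1 "absent": average over its background values 0.0 and 1.0.
print(expected_prediction(model, [1.0, 5.0], present={0}, background=background))  # → 6.0
```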
111. Exact Shapley value computation is exponential in the number of features
● Shapley values can be expressed as an expectation of marginals
𝜙(i) = ES ~ D [marginal(S, i)]
● Sampling-based methods can be used to approximate the expectation
● See: “Computational Aspects of Cooperative Game Theory”, Chalkiadakis et al. 2011
● The method is still computationally infeasible for models with hundreds of
features, e.g., image models
Computing Shapley Values
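The sampling view above can be sketched with random permutations: each sampled ordering contributes one marginal per player, and the average estimates 𝜙(i). The additive toy game is an assumption for illustration.

```python
import random

# Monte Carlo Shapley estimation: average each player's marginal contribution
# over randomly sampled player orderings.

def shapley_monte_carlo(v, n, num_samples=2000, seed=0):
    rng = random.Random(seed)
    phi = [0.0] * n
    for _ in range(num_samples):
        perm = list(range(n))
        rng.shuffle(perm)
        S = set()
        prev = v(S)
        for i in perm:
            S.add(i)
            curr = v(S)
            phi[i] += curr - prev  # marginal contribution of i in this ordering
            prev = curr
    return [p / num_samples for p in phi]

# Additive toy game: player i contributes i + 1 regardless of coalition.
v = lambda S: float(sum(i + 1 for i in S))
print(shapley_monte_carlo(v, 3))  # ≈ [1.0, 2.0, 3.0]
```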
112. ● Values of Non-Atomic Games (1974): Aumann and Shapley extend their
method → players can contribute fractionally
● Aumann-Shapley values calculated by integrating along a straight-line path…
same as Integrated Gradients!
● IG through a game theory lens: continuous game, feature absence is modeled
by replacement with a baseline value
● Axiomatically justified as a result:
○ Integrated Gradients is the unique path-integral method satisfying: Sensitivity, Insensitivity,
Linearity preservation, Implementation invariance, Completeness, and Symmetry
Non-atomic Games: Aumann-Shapley Values and IG
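A numerical sketch of the straight-line path integral for a toy differentiable function; the model, input, and baseline are assumptions, and real implementations compute gradients with an autodiff framework rather than finite differences. The printed sum illustrates Completeness: attributions add up to F(input) - F(baseline).

```python
# Numerical sketch of Integrated Gradients along the straight-line path
# from a baseline to the input.

def integrated_gradients(model, x, baseline, steps=100, eps=1e-5):
    n = len(x)
    ig = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint rule along the path
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        for i in range(n):
            hi, lo = list(point), list(point)
            hi[i] += eps
            lo[i] -= eps
            grad_i = (model(hi) - model(lo)) / (2 * eps)  # d model / d x_i
            ig[i] += (x[i] - baseline[i]) * grad_i / steps
    return ig

model = lambda x: x[0] * x[1]  # simple non-linear model
attrs = integrated_gradients(model, [2.0, 3.0], [0.0, 0.0])
print(sum(attrs), model([2.0, 3.0]) - model([0.0, 0.0]))  # both ≈ 6.0
```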
113. Baselines (or Norms) are essential to explanations [Kahneman-Miller 86]
● E.g., A man suffers from indigestion. The doctor blames it on a stomach ulcer. His wife blames it on eating turnips. Both are correct relative to their baselines.
● The baseline may also be an important analysis knob.
Attributions are contrastive, whether we think about it or not.
Lesson learned: baselines are important
115. Some things that are missing:
● Feature interactions (ignored or averaged out)
● Which training examples influenced the prediction (training-agnostic)
● Global properties of the model (prediction-specific)
An instance where attributions are useless:
● A model that predicts TRUE when there is an even number of black pixels and FALSE otherwise
Attributions don't explain everything
116. Attributions are for human consumption
[Figure: attributions have a large range and long tail across pixels; naive scaling of attributions from 0 to 255 vs. clipping attributions at 99% to reduce range]
● Humans interpret attributions and generate insights
○ Doctor maps attributions for x-rays to pathologies
● Visualization matters as much as the attribution technique
120. Influence functions
● Trace a model’s prediction through the learning algorithm and
back to its training data
● Training points “responsible” for a given prediction
Figure credit: Understanding Black-box Predictions via Influence Functions. Koh and Liang. ICML 2017
121. Example based Explanations
● Prototypes: Representative of all the training data.
● Criticisms: Data instance that is not well represented by the set of prototypes.
Figure credit: Examples are not Enough, Learn to Criticize! Criticism for Interpretability. Kim, Khanna and Koyejo. NIPS 2016
Learned prototypes and criticisms from Imagenet dataset (two types of dog breeds)
123. Global Explanations Methods
● Partial Dependence Plot: Shows
the marginal effect one or two
features have on the predicted
outcome of a machine learning model
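The marginal effect can be sketched directly: fix the feature at each grid value, predict over the whole dataset, and average. The toy model, data, and grid are assumptions for illustration.

```python
# Sketch of a one-feature partial dependence computation.

def partial_dependence(model, data, feature, grid):
    pd_values = []
    for value in grid:
        preds = []
        for row in data:
            z = list(row)
            z[feature] = value  # force the feature to the grid value
            preds.append(model(z))
        pd_values.append(sum(preds) / len(preds))  # average over the dataset
    return pd_values

model = lambda x: 2 * x[0] + x[1]
data = [[0.1, 1.0], [0.4, 3.0], [0.9, 5.0]]
print(partial_dependence(model, data, feature=0, grid=[0.0, 1.0]))  # → [3.0, 5.0]
```

Plotting the returned values against the grid gives the partial dependence plot.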
124. Global Explanations Methods
● Permutations: The importance of a feature is the increase in the prediction error of the model after permuting the feature's values, which breaks the relationship between the feature and the true outcome.
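The permutation recipe can be sketched as follows; the toy model, data, and use of mean-squared error are assumptions. A feature the model ignores gets zero importance.

```python
import random

# Sketch of permutation feature importance: shuffle one feature's column and
# measure the increase in mean-squared error.

def permutation_importance(model, X, y, feature, seed=0):
    mse = lambda preds: sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)
    base_error = mse([model(row) for row in X])
    col = [row[feature] for row in X]
    random.Random(seed).shuffle(col)  # break the feature-outcome relationship
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return mse([model(row) for row in X_perm]) - base_error

model = lambda x: 3 * x[0]  # model uses only feature 0
X = [[1.0, 9.0], [2.0, 8.0], [3.0, 7.0], [4.0, 6.0]]
y = [3.0, 6.0, 9.0, 12.0]
print(permutation_importance(model, X, y, feature=1))  # → 0.0 (unused feature)
```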
125. Achieving Explainable AI
Approach 1: Post-hoc explain a given AI model
● Individual prediction explanations in terms of input features, influential examples, concepts, local decision rules
● Global prediction explanations of the entire model in terms of partial dependence plots, global feature importance, global decision rules
Approach 2: Build an interpretable model
● Logistic regression, Decision trees, Decision lists and sets, Generalized Additive Models (GAMs)
126. Decision Trees
Is the person fit?
Age < 30?
- Yes: Eats a lot of pizzas? Yes → Unfit; No → Fit
- No: Exercises in the morning? Yes → Fit; No → Unfit
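The toy tree can be written as plain code, which is exactly why such models are interpretable: the prediction is a short chain of human-readable conditions. The branch and leaf assignment below follows the usual version of this illustration and is an assumption.

```python
# The "Is the person fit?" decision tree as code: each prediction traces a
# readable path of conditions.

def is_fit(age, eats_lots_of_pizza, exercises_in_morning):
    if age < 30:
        # Young: fitness hinges on diet in this toy tree
        return not eats_lots_of_pizza
    else:
        # Older: fitness hinges on morning exercise
        return exercises_in_morning

print(is_fit(25, eats_lots_of_pizza=False, exercises_in_morning=False))  # → True
print(is_fit(45, eats_lots_of_pizza=False, exercises_in_morning=False))  # → False
```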
127. Decision Set
Figure credit: Interpretable Decision Sets: A Joint Framework for Description and Prediction, Lakkaraju, Bach,
Leskovec
129. Decision List
Figure credit: Interpretable Decision Sets: A Joint Framework for Description and Prediction, Lakkaraju, Bach,
Leskovec
130. Falling Rule List
A falling rule list is an ordered list of if-then rules (falling rule lists are a type of decision list), such that the estimated probability of success decreases monotonically down the list. Thus, a falling rule list directly contains the decision-making process, whereby the most at-risk observations are classified first, then the second set, and so on.
131. Box Drawings for Rare Classes
Figure credit: Box Drawings for Learning with Imbalanced Data. Siong Thye Goh and Cynthia Rudin
132. Supersparse Linear Integer Models for Optimized
Medical Scoring Systems
Figure credit: Supersparse Linear Integer Models for Optimized Medical Scoring Systems. Berk Ustun and Cynthia Rudin
134. GLMs and GAMs
Intelligible Models for Classification and Regression. Lou, Caruana and Gehrke KDD 2012
Accurate Intelligible Models with Pairwise Interactions. Lou, Caruana, Gehrke and Hooker. KDD 2013
135. Explainable Machine Learning
(from a Knowledge Graph Perspective)
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
136. Knowledge Graph (1)
August 28th, 2019, Tutorial on Explainable AI
137. Knowledge Graph (2)
138. Knowledge Graph Construction
141. Knowledge Graph in Machine Learning (3)
[Figure: CNN trained on data; from the input (unlabeled image), 1st-layer neurons respond to simple shapes, 2nd-layer neurons to more complex structures, nth-layer neurons to highly complex, abstract concepts: low-level features to high-level features]
Augmenting (intermediate) features with more semantics such as knowledge graph embeddings / entities
142. Knowledge Graph in Machine Learning (4)
Augmenting (input, intermediate) features – output relationship with more semantics to capture causal relationships
143. Knowledge Graph in Machine Learning (5)
Description 1: This is an orange train accident
Description 2: This is a train accident between two speed
merchant trains of characteristics X43-B and Y33-C in a dry
environment
Description 3: This is a public transportation accident
Augmenting models with
semantics to support
personalized explanation
144. Knowledge Graph in Machine Learning (6)
"How to explain transfer learning with appropriate knowledge representation?"
Knowledge-Based Transfer Learning Explanation. Huajun Chen (College of Computer Science, Zhejiang University, China; Alibaba-Zhejiang University Frontier Technology Research Center), Jiaoyan Chen (Department of Computer Science, University of Oxford, UK), Freddy Lecue (INRIA, France; Accenture Labs, Ireland), Jeff Z. Pan (Department of Computer Science, University of Aberdeen, UK), Ian Horrocks (Department of Computer Science, University of Oxford, UK). Proceedings of the Sixteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2018)
Augmenting input features and domains with semantics to support interpretable transfer learning
157. • Hardware: High-performance, scalable, generic (to different FPGA families) & portable CNN-dedicated programmable processor implemented on an FPGA for real-time embedded inference
• Software: Knowledge graph extension of object detection
Transitioning
This is an Obstacle: Boulder obstructing the train: XG142-R on Rail_Track from City: Cannes to City: Marseille at Location: Tunnel VIX due to Landslide
158. [Figure: object detections (Tunnel 0.74, Boulder 0.81, Railway 0.90) linked in a knowledge graph: Train operating on Rail Track; Boulder, an Obstacle due to Landslide, obstructing Tunnel]
159. Freddy Lécué, Jiaoyan Chen, Jeff Z. Pan,
Huajun Chen: Augmenting Transfer
Learning with Semantic Reasoning. IJCAI
2019: 1779-1785
Freddy Lécué, Tanguy Pommellet: Feeding
Machine Learning with Knowledge Graphs
for Explainable Object Detection. ISWC
Satellites 2019: 277-280
Freddy Lécué, Baptiste Abeloos, Jonathan
Anctil, Manuel Bergeron, Damien Dalla-Rosa, Simon Corbeil-Letourneau, Florian
Martet, Tanguy Pommellet, Laura Salvan,
Simon Veilleux, Maryam Ziaeefard: Thales
XAI Platform: Adaptable Explanation of
Machine Learning Systems - A Knowledge
Graphs Perspective. ISWC Satellites 2019:
315-316
Jiaoyan Chen, Freddy Lécué, Jeff Z. Pan, Ian
Horrocks, Huajun Chen: Knowledge-Based
Transfer Learning Explanation. KR 2018:
349-358
Knowledge Graph in Machine Learning - An Implementation
160. XAI Case Studies in Industry:
Applications, Lessons Learned, and Research Challenges
161. Challenge: Object detection is usually performed by a large portfolio of Artificial Neural Network (ANN) architectures trained on large amounts of labelled data. Explaining object detections is rather difficult due to the high complexity of the most accurate ANNs.
AI Technology: Integration of AI related technologies
i.e., Machine Learning (Deep Learning / CNNs), and
knowledge graphs / linked open data.
XAI Technology: Knowledge graphs and Artificial
Neural Networks
Explainable Boosted Object Detection – Industry Agnostic
162. Context
● Explanation in Machine Learning systems has been identified as a key asset for large-scale deployment of Artificial Intelligence (AI) in critical systems
● Explanations could be example-based (who is similar), features-based
(what is driving decision), or even counterfactual (what-if scenario) to
potentially action on an AI system; they could be represented in many
different ways e.g., textual, graphical, visual
Goal
● All representations serve different means, purpose and operators. We
designed the first-of-its-kind XAI platform for critical systems i.e., the
Thales Explainable AI Platform which aims at serving explanations
through various forms
Approach: Model-Agnostic
● [AI:ML] Grad-Cam, Shapley, Counter-factual, Knowledge graph
Thales XAI
Platform
164. Challenge: Designing Artificial Neural Network
architectures requires extensive experimentation
(i.e., training phases) and parameter tuning
(optimization strategy, learning rate, number of
layers…) to reach optimal and robust machine
learning models.
AI Technology: Artificial Neural Network
XAI Technology: Artificial Neural Network, 3D
Modeling and Simulation Platform For AI
Debugging Artificial Neural Networks – Industry Agnostic
Zetane.com
166. Challenge: Public transportation increasingly relies on
self-driving vehicles. Even as trains become more
autonomous, humans stay in the loop for critical decisions,
for instance in the case of obstacles. When an obstacle is
detected, the train is required to recommend an action, i.e.,
proceed or return to the station. The human operator is then
required to validate the recommendation through an explanation
exposed by the train or machine.
AI Technology: Integration of AI-related technologies, i.e.,
Machine Learning (Deep Learning / CNNs) and semantic
segmentation.
XAI Technology: Deep learning and Epistemic uncertainty
Obstacle Identification Certification (Trust) - Transportation
168. Challenge: Globally, 323,454 flights are delayed every year.
Airline-caused delays totaled 20.2 million minutes last year,
generating huge costs for the company. The existing in-house
technique reaches 53% accuracy for predicting flight delay,
does not provide any time estimate (in minutes, as opposed
to True/False), and is unable to capture the underlying
reasons (explanation).
AI Technology: Integration of AI-related technologies, i.e.,
Machine Learning (Deep Learning / Recurrent Neural
Networks), Reasoning (through semantics-augmented case-
based reasoning) and Natural Language Processing, for
building a robust model which can (1) predict flight delays in
minutes, and (2) explain delays by comparing with historical
cases.
XAI Technology: Knowledge graph embedded Sequence
Learning using LSTMs
Jiaoyan Chen, Freddy Lécué, Jeff Z. Pan, Ian Horrocks, Huajun Chen: Knowledge-Based Transfer Learning Explanation. KR 2018: 349-358
Nicholas McCarthy, Mohammad Karzand, Freddy Lécué: Amsterdam to Dublin Eventually Delayed? LSTM and Transfer Learning for Predicting Delays of Low Cost Airlines. AAAI 2019
Explainable On-Time Performance - Transportation
170. Challenge: Predicting and explaining abnormal employee expenses (e.g., high accommodation prices in 1000+ cities).
AI Technology: Various techniques have matured over the last two decades and achieve excellent results. However, most methods address the problem
from a statistical, purely data-centric angle, which in turn limits any interpretation. We built a web application running live with real data from (i) travel and
expenses from Accenture, and (ii) external data from third parties such as the Google Knowledge Graph, DBpedia (a structured, linked-data version of Wikipedia), and social
events from Eventful, for explaining abnormalities.
XAI Technology: Knowledge graph embedded Ensemble Learning
Freddy Lécué, Jiewen Wu: Explaining and predicting abnormal
expenses at large scale using knowledge graph based
reasoning. J. Web Sem. 44: 89-103 (2017)
Explainable Anomaly Detection – Finance (Compliance)
171. Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, Farbod Kamiab, Zhao Shen, Freddy Lécué: Interpretable Credit Application Predictions With Counterfactual Explanations.
FEAP-AI4fin workshop, NeurIPS, 2018.
Counterfactual Explanations for Credit Decisions (3) - Finance
172. Challenge: Explaining medical condition relapse in the
context of oncology.
AI Technology: Relational learning
XAI Technology: Knowledge graphs and Artificial
Neural Networks
Explanation of Medical Condition Relapse – Health
(Figure: knowledge graph parts explaining medical condition relapse)
180. Representative Ranking for Talent Search
S. C. Geyik, S. Ambler,
K. Kenthapadi, Fairness-Aware
Ranking in Search &
Recommendation Systems with
Application to LinkedIn Talent
Search, KDD’19.
[Microsoft’s AI/ML
conference
(MLADS’18). Distinguished
Contribution Award]
Building Representative
Talent Search at LinkedIn
(LinkedIn engineering blog)
181. Intuition for Measuring and Achieving Representativeness
Ideal: Top ranked results should follow a desired distribution on
gender/age/…
E.g., same distribution as the underlying talent pool
Inspired by “Equal Opportunity” definition [Hardt et al, NIPS’16]
Defined measures (skew, divergence) based on this intuition
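The skew measure sketched above can be made concrete. The snippet below is a minimal illustration of the Skew@k definition from Geyik et al. (log-ratio of a value's observed proportion in the top k to its desired proportion), not LinkedIn's production code; the function name is ours.

```python
import math

def skew_at_k(ranked_attrs, attr_value, desired_prop, k):
    """Log-ratio of attr_value's share in the top-k results to its
    desired share; 0 means perfectly representative, negative means
    under-represented, positive means over-represented."""
    top_k = ranked_attrs[:k]
    observed = sum(1 for a in top_k if a == attr_value) / k
    # Small floor avoids log(0) when a value is absent from the top k.
    eps = 1e-9
    return math.log(max(observed, eps) / max(desired_prop, eps))
```

For example, if the desired proportion of 'F' is 0.5 but only 1 of the top 4 results has it, the skew is log(0.25/0.5) ≈ -0.69, flagging under-representation.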
182. Desired Proportions within the Attribute of Interest
Compute the proportions of the values of the attribute (e.g., gender,
gender-age combination) amongst the set of qualified candidates
● “Qualified candidates” = Set of candidates that match the
search query criteria
● Retrieved by LinkedIn’s Galene search engine
Desired proportions could also be obtained based on legal
mandate / voluntary commitment
183. Fairness-aware Reranking Algorithm (Simplified)
Partition the set of potential candidates into different buckets for
each attribute value
Rank the candidates in each bucket according to the scores
assigned by the machine-learned model
Merge the ranked lists, balancing the representation requirements
and the selection of highest scored candidates
Representation requirement: Desired distribution on gender/age/…
Algorithmic variants based on how we achieve this balance
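The partition / rank / merge steps above can be sketched as a greedy re-ranker. This is a simplified illustration of one algorithmic variant (keeping each attribute value's count at or above the floor of its target proportion times positions filled), not the deployed algorithm; all names are ours.

```python
import math

def rerank(buckets, desired_props, n):
    """buckets: {attr_value: [(score, candidate), ...]} sorted by score desc.
    desired_props: {attr_value: target proportion}.
    Greedily build a top-n list that keeps each attribute value at or
    above its minimum representation while preferring higher scores."""
    counts = {v: 0 for v in buckets}
    queues = {v: list(cands) for v, cands in buckets.items()}
    result = []
    for pos in range(1, n + 1):
        # Attribute values that would fall below their minimum count.
        below = [v for v in queues
                 if queues[v] and counts[v] < math.floor(desired_props[v] * pos)]
        pool = below if below else [v for v in queues if queues[v]]
        if not pool:
            break
        # Among eligible buckets, pick the one whose head has the top score.
        v = max(pool, key=lambda u: queues[u][0][0])
        score, cand = queues[v].pop(0)
        counts[v] += 1
        result.append(cand)
    return result
```

With a 50/50 target, a bucket of high-scoring 'A' candidates cannot crowd out 'B' candidates: the representation constraint forces an interleaving while still favoring scores wherever the constraint is slack.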
184. Validating Our Approach
Gender Representativeness
● Over 95% of all searches are representative compared to the
qualified population of the search
Business Metrics
● A/B test over LinkedIn Recruiter users for two weeks
● No significant change in business metrics (e.g., # InMails sent
or accepted)
Ramped to 100% of LinkedIn Recruiter users worldwide
185. Lessons
learned
• Post-processing approach desirable
• Model agnostic
• Scalable across different model choices
for our application
• Acts as a “fail-safe”
• Robust to application-specific business
logic
• Easier to incorporate as part of existing
systems
• Build a stand-alone service or component
for post-processing
• No significant modifications to the existing
components
• Complementary to efforts to reduce bias from
training data & during model training
• Collaboration/consensus across key stakeholders
186. Acknowledgements
LinkedIn Talent Solutions Diversity team, Hire & Careers AI team, Anti-abuse AI team, Data Science
Applied Research team
Special thanks to Deepak Agarwal, Parvez Ahammad, Stuart Ambler, Kinjal Basu, Jenelle Bray, Erik
Buchanan, Bee-Chung Chen, Patrick Cheung, Gil Cottle, Cyrus DiCiccio, Patrick Driscoll, Carlos Faham,
Nadia Fawaz, Priyanka Gariba, Meg Garlinghouse, Gurwinder Gulati, Rob Hallman, Sara Harrington,
Joshua Hartman, Daniel Hewlett, Nicolas Kim, Rachel Kumar, Nicole Li, Heloise Logan, Stephen Lynch,
Divyakumar Menghani, Varun Mithal, Arashpreet Singh Mor, Tanvi Motwani, Preetam Nandy, Lei Ni,
Nitin Panjwani, Igor Perisic, Hema Raghavan, Romer Rosales, Guillaume Saint-Jacques, Badrul Sarwar,
Amir Sepehri, Arun Swami, Ram Swaminathan, Grace Tang, Ketan Thakkar, Sriram Vasudevan,
Janardhanan Vembunarayanan, James Verbus, Xin Wang, Hinkmond Wong, Ya Xu, Lin Yang, Yang Yang,
Chenhui Zhai, Liang Zhang, Yani Zhang
187. Engineering for Fairness in AI Lifecycle
Problem Formation → Dataset Construction → Algorithm Selection → Training Process → Testing Process → Deployment → Feedback
Is an algorithm an
ethical solution to our
problem?
Does our data include enough
minority samples?
Are there missing/biased
features?
Do we need to apply debiasing
algorithms to preprocess our
data?
Do we need to include fairness
constraints in the function?
Have we evaluated the model
using relevant fairness
metrics?
Are we deploying our model
on a population that we did
not train/test on?
Are there unequal effects
across users?
Does the model encourage
feedback loops that can
produce increasingly unfair
outcomes?
Credit: K. Browne & J. Draper
188. Engineering for Fairness in AI Lifecycle
S. Vasudevan, K. Kenthapadi, FairScale: A Scalable Framework for Measuring Fairness in AI Applications, 2019
189. FairScale System Architecture [Vasudevan & Kenthapadi, 2019]
• Flexibility of Use
(Platform agnostic)
• Ad-hoc exploratory
analyses
• Deployment in offline
workflows
• Integration with ML
Frameworks
• Scalability
• Diverse fairness
metrics
• Conventional fairness
metrics
• Benefit metrics
• Statistical tests
190. Fairness-aware experimentation
[Saint-Jacques and Sepehri, KDD’19 Social Impact Workshop]
Imagine LinkedIn has 10 members.
Each of them has 1 session a day.
A new product increases sessions by +1 session per member on average.
That average can hide very different distributions: everyone gaining one
session, or one member gaining all ten additional sessions.
Both of these are +1 session / member on average!
One is much more unequal than the other. We want to catch that.
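To make the inequality point concrete, here is a sketch using the Gini coefficient as one possible inequality measure (the exact metric used in the referenced work may differ):

```python
def gini(values):
    """Gini coefficient: 0 = perfectly equal, approaching 1 = maximally unequal."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula based on the rank-weighted sum of sorted values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Two ways of adding +1 session/member on average to 10 members:
even = [2] * 10           # everyone gains one session -> gini 0
skewed = [1] * 9 + [11]   # one member gains all ten sessions -> gini 0.45
```

Both allocations pass a plain A/B test on average sessions; only an inequality-aware metric separates them.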
192. LinkedIn Recruiter
● Recruiter Searches for Candidates
○ Standardized and free-text search criteria
● Retrieval and Ranking
○ Filter candidates using the criteria
○ Rank candidates in multiple levels using ML
models
193. Modeling Approaches
● Pairwise XGBoost
● GLMix
● DNNs via TensorFlow
● Optimization Criteria: inMail Accepts
○ Positive: inMail sent by the recruiter and positively responded to by the candidate
■ Mutual interest between the recruiter and the candidate
195. How We Utilize Feature Importances for GBDT
● Understanding feature digressions
○ Which features that were impactful no longer are?
○ Should we debug feature generation?
● Introducing new features in bulk and identifying effective ones
○ Activity features for the last 3, 6, 12, and 24 hours were introduced (costly to compute)
○ Should we keep all such features?
● Separating the factors that caused an improvement
○ Did an improvement come from a new feature, a new labeling strategy, or a new data source?
○ Did the ordering between features change?
● Shortcoming: A global view, not case by case
196. GLMix Models
● Generalized Linear Mixed Models
○ Global: Linear Model
○ Per-contract: Linear Model
○ Per-recruiter: Linear Model
● Lots of parameters overall
○ For a specific recruiter or contract the weights can be summed up
● Inherently explainable
○ Contribution of a feature is “weight x feature value”
○ Can be examined in a case-by-case manner as well
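The "weight x feature value" contribution can be sketched as follows. The three weight dictionaries stand in for the global, per-contract, and per-recruiter linear models whose coefficients are summed for a specific request; all names and values are illustrative.

```python
def feature_contributions(features, global_w, contract_w, recruiter_w):
    """Per-feature contribution = (summed coefficient) x feature value.
    Each weight dict maps feature name -> linear coefficient; the
    per-contract and per-recruiter models add random effects on top
    of the global fixed-effect model."""
    contribs = {}
    for name, value in features.items():
        w = (global_w.get(name, 0.0)
             + contract_w.get(name, 0.0)
             + recruiter_w.get(name, 0.0))
        contribs[name] = w * value
    return contribs
```

Because each contribution is a simple product, the same computation serves both a global view (inspect the summed weights) and a case-by-case explanation (inspect the products for one candidate).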
197. TensorFlow Models in Recruiter and Explaining Them
● We utilize the Integrated Gradients [ICML 2017] method
● How do we determine the baseline example?
○ Every query creates its own feature values for the same candidate
○ Query match features, time-based features
○ Recruiter affinity, and candidate affinity features
○ A candidate would be scored differently by each query
○ Cannot recommend a “Software Engineer” to a search for a “Forensic Chemist”
○ There is no globally neutral example for comparison!
198. Query-Specific Baseline Selection
● For each query:
○ Score examples by the TF model
○ Rank examples
○ Choose one example as the baseline
○ Compare others to the baseline example
● How to choose the baseline example
○ Last candidate
○ Kth percentile in ranking
○ A random candidate
○ Request by user (answering a question like: “Why was I presented candidate x above
candidate y?”)
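The baseline-selection idea can be sketched with a toy numpy version of Integrated Gradients, using the lowest-scored candidate for the query as the baseline. The scoring model and candidate vectors are invented for illustration, and gradients are taken by finite differences rather than through TensorFlow as in the production setup.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=64):
    """Riemann-sum approximation of Integrated Gradients for a scalar
    model f: R^d -> R, using central-difference gradients along the
    straight path from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.zeros_like(x, dtype=float)
    eps = 1e-5
    for a in alphas:
        point = baseline + a * (x - baseline)
        for i in range(len(x)):
            up, dn = point.copy(), point.copy()
            up[i] += eps
            dn[i] -= eps
            grads[i] += (f(up) - f(dn)) / (2 * eps)
    return (x - baseline) * grads / steps

def model(v):  # toy scoring model (assumption, not the real ranker)
    return 3.0 * v[0] + 1.0 * v[1] ** 2

# Query-specific baseline: the last-ranked candidate for this query.
candidates = [np.array([0.9, 0.5]), np.array([0.2, 0.1])]
baseline = min(candidates, key=model)
attributions = integrated_gradients(model, candidates[0], baseline)
```

A useful sanity check is completeness: the attributions sum to the score difference between the explained candidate and the baseline candidate.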
201. Pros & Cons
● Explains potentially very complex models
● Case-by-case analysis
○ Why do you think candidate x is a better match for my position?
○ Why do you think I am a better fit for this job?
○ Why am I being shown this ad?
○ Great for debugging real-time problems in production
● Global view is missing
○ Aggregate Contributions can be computed
○ Could be costly to compute
202. Lessons Learned and Next Steps
● Global explanations vs. Case-by-case Explanations
○ Global gives an overview, better for making modeling decisions
○ Case-by-case could be more useful for the non-technical user, better for debugging
● Integrated gradients worked well for us
○ Complex models make it harder for developers to map improvement to effort
○ Use-case gave intuitive results, on top of completely describing score differences
● Next steps
○ Global explanations for Deep Models
209. Impact of Piecewise Approach
● Target sample x_k = (x_k1, x_k2, ⋯)
● Top feature contributor
○ LIME: large magnitude of β_kj ⋅ x_kj
○ xLIME: large magnitude of β_kj⁻ ⋅ x_kj
● Top positive feature influencer
○ LIME: large magnitude of β_kj
○ xLIME: large magnitude of negative β_kj⁻ or positive β_kj⁺
● Top negative feature influencer
○ LIME: large magnitude of β_kj
○ xLIME: large magnitude of positive β_kj⁻ or negative β_kj⁺
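The β⁻ / β⁺ idea can be sketched for a single feature by fitting separate slopes to perturbed samples on either side of the target value. This is an illustrative simplification of the piecewise local model, with invented data; the actual xLIME fit involves weighted regression over all features.

```python
import numpy as np

def piecewise_slopes(xs, ys, x_k):
    """Fit separate linear slopes to samples left and right of the
    target value x_k, returning (beta_minus, beta_plus) as in a
    piecewise-linear local model for a single feature."""
    left = xs <= x_k
    right = ~left
    beta_minus = np.polyfit(xs[left], ys[left], 1)[0]
    beta_plus = np.polyfit(xs[right], ys[right], 1)[0]
    return beta_minus, beta_plus
```

On a V-shaped response such as y = |x| around x_k = 0, a single LIME slope averages out to roughly zero and misses both influencers, while the piecewise fit recovers β⁻ = -1 and β⁺ = +1.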
210. Localized Stratified Sampling: Idea
Method: Sampling based on empirical distribution around target value at each feature level
211. Localized Stratified Sampling: Method
● Sampling based on the empirical distribution around the target value for each feature
● For target sample x_k = (x_k1, x_k2, ⋯), sample values of feature j according to
p_j(X_j) ⋅ N(x_kj, (α ⋅ s_j)²)
○ p_j(X_j): empirical distribution of feature j
○ x_kj: feature value in the target sample
○ s_j: standard deviation of feature j
○ α: interpretable range, a tradeoff between interpretable coverage and local accuracy
● In LIME, sampling is according to N(x_j, s_j²)
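A minimal sketch of this sampling rule: draw from the feature's empirical distribution, reweighted by a Gaussian of width α ⋅ s_j centred on the target value. Function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def localized_sample(feature_values, x_kj, alpha, size=100):
    """Draw perturbed values for one feature from its empirical
    distribution p_j, reweighted by a Gaussian centred at the target
    value x_kj with width alpha * s_j (feature standard deviation)."""
    s_j = feature_values.std()
    w = np.exp(-0.5 * ((feature_values - x_kj) / (alpha * s_j)) ** 2)
    w /= w.sum()
    return rng.choice(feature_values, size=size, p=w)
```

Unlike LIME's plain Gaussian N(x_j, s_j²), this only produces values that actually occur in the data, so the local points are realistic; α controls how far from the target value the samples may stray.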
213. LTS LCP (LinkedIn Career Page) Upsell
● A subset of churn data
○ Total Companies: ~ 19K
○ Company features: 117
● Problem: Estimate whether there will be an upsell, given a set of features about
the company's utility from the product
217. Key Takeaways
● Looking at the explanation as contributor vs. influencer features is useful
○ Contributor: Which features end up determining the current outcome, case by case
○ Influencer: What needs to change to improve the likelihood, case by case
● xLIME aims to improve on LIME via:
○ Piecewise linear regression: More accurately describes local point, helps with finding correct
influencers
○ Localized stratified sampling: More realistic set of local points
● Better captures the important features
235. Fiddler's Explainable AI Engine
All your data (from any data warehouse) and custom models feed the Fiddler Modeling Layer ("Explainable AI for everyone"), which exposes APIs, Dashboards, Reports, and Trusted Insights.
Mission: Unlock Trust, Visibility and Insights by making AI Explainable in every enterprise
236. Credit Line Increase
Fair lending laws [ECOA, FCRA] require credit decisions to be explainable.
Example: credit lending in a black-box ML world. The Bank Credit Lending Model denies a request (credit lending score = 0.3), and the applicant can only query the AI system: Why? Why not? How?
237. How Can This Help…
Customer Support
Why was a customer loan
rejected?
Bias & Fairness
How is my model doing
across demographics?
Lending LOB
What variables should they
validate with customers on
“borderline” decisions?
Explain individual predictions (using Shapley Values)
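Shapley values can be computed exactly by averaging a feature's marginal contribution over all subsets of the other features, with absent features set to a baseline. The brute-force sketch below is exponential in the number of features, so it is only viable for tiny models; production systems rely on efficient approximations (e.g., TreeSHAP or sampling), and all names here are ours.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for a scalar model f over n features:
    feature i's value is its weighted average marginal contribution
    over all subsets, with absent features set to the baseline."""
    n = len(x)
    def value(subset):
        point = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(point)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for s in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (value(set(s) | {i}) - value(set(s)))
        phis.append(phi)
    return phis
```

By the efficiency property, the values sum to f(x) - f(baseline), which is exactly the "why this score rather than the baseline" question a loan applicant or support agent asks.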
239. How Can This Help…
Explain individual predictions (using Shapley Values); probe the model on counterfactuals
Probe the
model on
counterfactuals
240. How Can This Help…
Customer Support
Why was a customer loan
rejected?
Why was the credit card limit
low?
Why was this transaction
marked as fraud?
Integrating explanations
241. How Can This Help…
Global Explanations
What are the primary feature
drivers of the dataset on my
model?
Region Explanations
How does my model
perform on a certain slice?
Where does the model not
perform well? Is my model
uniformly fair across slices?
Slice & Explain
242. Model Monitoring: Feature Drift
Investigate data drift impacting model performance: compare the feature distribution for a time slice against the training distribution.
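One common way to quantify such drift is the Population Stability Index (PSI) between the training distribution and a serving time slice; whether Fiddler uses PSI specifically is not stated here, so this is an illustrative sketch.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a training ('expected')
    feature distribution and a serving time slice ('actual').
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    eps = 1e-6  # floor avoids division by zero on empty bins
    e_pct = np.maximum(e_counts / e_counts.sum(), eps)
    a_pct = np.maximum(a_counts / max(a_counts.sum(), 1), eps)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

An identical slice yields PSI 0, while a shifted feature distribution produces a large value that can trigger an alert and an explanation-driven investigation.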
243. How Can This Help…
Operations
Why are there outliers in
model predictions? What
caused model performance
to go awry?
Data Science
How can I improve my ML
model? Where does it not do
well?
Model Monitoring: Outliers with Explanations (flag outlier predictions and attach individual explanations)
244. Some lessons learned at Fiddler
● Attributions are contrastive to their baselines
● Explaining explanations is important (e.g. good UI)
● In practice, we face engineering challenges as much as
theoretical challenges
245. Recap
● Part I: Introduction and Motivation
○ Motivation, Definitions & Properties
○ Evaluation Protocols & Metrics
● Part II: Explanation in AI (not only Machine Learning!)
○ From Machine Learning to Knowledge Representation and Reasoning and Beyond
● Part III: Explainable Machine Learning (from a Machine Learning Perspective)
● Part IV: Explainable Machine Learning (from a Knowledge Graph Perspective)
● Part V: XAI Tools on Applications, Lessons Learnt and Research Challenges
246. Challenges & Tradeoffs
(Figure: tradeoffs among Transparency, User Privacy, Fairness, and Performance)
● Lack of standard interface for ML models
makes pluggable explanations hard
● Explanation needs vary depending on the type
of the user who needs it and also the problem
at hand.
● The algorithm you employ for explanations
might depend on the use-case, model type,
data format, etc.
● There are trade-offs w.r.t. Explainability,
Performance, Fairness, and Privacy.
247. Explainability in ML: Broad Challenges
Actionable explanations
Balance between explanations & model secrecy
Robustness of explanations to failure modes (Interaction between ML
components)
Application-specific challenges
Conversational AI systems: contextual explanations
Gradation of explanations
Tools for explanations across AI lifecycle
Pre & post-deployment for ML models
Model developer vs. End user focused
248. Thanks! Questions?
● Feedback most welcome :-)
○ freddy.lecue@inria.fr, krishna@fiddler.ai, sgeyik@linkedin.com,
kenthk@amazon.com, vamithal@linkedin.com, ankur@fiddler.ai,
luke@fiddler.ai, p.minervini@ucl.ac.uk, riccardo.guidotti@unipi.it
● Tutorial website: https://xaitutorial2020.github.io
● To try Fiddler, please send an email to info@fiddler.ai
● To try Thales XAI Platform , please send an email to freddy.lecue@thalesgroup.com