Rsqrd AI - Challenges in Deploying Explainable Machine Learning

In this talk, Umang Bhatt presents his work on understanding how explainability is used in industry, research done in collaboration with the Partnership on AI. Umang Bhatt is a Research Fellow at the Partnership on AI and a Ph.D. student in the Machine Learning Group at the University of Cambridge. His research interests lie in statistical machine learning, explainable artificial intelligence, and human-machine collaboration.

  1. Challenges in Deploying Explainable Machine Learning. Umang Bhatt: PhD Student, University of Cambridge; Research Fellow, Partnership on AI; Student Fellow, Leverhulme Centre for the Future of Intelligence. @umangsbhatt
  2. This Talk: 1. How are existing approaches to explainability used in practice? 2. Can existing explainability tools be used to ensure model fairness? 3. How can we create explainability tools for external stakeholders?
  3. This Talk: 1. How are existing approaches to explainability used in practice? 2. Can existing explainability tools be used to ensure model fairness? 3. How can we create explainability tools for external stakeholders?
  4. Explainable Machine Learning in Deployment. Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yuhuan Jia, Joydeep Ghosh, Ruchir Puri, José Moura, and Peter Eckersley. Appeared at the ACM Conference on Fairness, Accountability, and Transparency 2020. https://arxiv.org/abs/1909.06342
  5. Growth of Transparency Literature. Many algorithms have been proposed to “explain” machine learning model output. We study how organizations use these algorithms, if at all.
  6. Our Approach. Semi-structured interviews, 30 minutes to 2 hours each. 50 individuals from 30 organizations interviewed.
  7. Shared Language. • Transparency: providing stakeholders with relevant information about how the model works; this includes documentation of the training procedure, analysis of the training data distribution, code releases, feature-level explanations, etc. • Explainability: providing insights into a model’s behavior for specific datapoint(s).
  8. Our Questions. • What type of explanations have you used (e.g., feature-based, sample-based, counterfactual, or natural language)? • Who is the audience for the model explanation (e.g., research scientists, product managers, domain experts, or users)? • In what context have you deployed the explanations (e.g., informing the development process, informing human decision makers about the model, or informing the end user on how actions were taken based on the model’s output)?
  9. Types of Explanations: Feature Importance, Sample Importance, Counterfactuals.
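The three explanation families on slide 9 can be made concrete with a small sketch. The following is my own illustration, not code from the talk; the toy logistic-regression model, the similarity-based proxy for sample importance, and the `counterfactual` helper are all hypothetical choices for demonstration, assuming NumPy and scikit-learn are available.

```python
# Illustrative sketch (not from the talk) of three explanation families on a toy model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)
x = X[0]

# 1. Feature importance: attribute a single prediction to input features.
#    For a linear model, coefficient * feature value is a simple local attribution.
feature_importance = model.coef_[0] * x

# 2. Sample importance: score training points by how much they support the prediction.
#    Crude proxy here: cosine similarity to x, masked by agreement with the predicted label.
pred = model.predict(x.reshape(1, -1))[0]
sims = (X @ x) / (np.linalg.norm(X, axis=1) * np.linalg.norm(x) + 1e-9)
sample_importance = sims * (y == pred)

# 3. Counterfactual: smallest change to one feature that flips the prediction.
def counterfactual(x, model, feature=0, step=0.1, max_steps=200):
    x_cf = x.copy()
    base_pred = model.predict(x.reshape(1, -1))[0]
    # Move the feature in the direction that pushes the logit across the decision boundary.
    sign = np.sign(model.coef_[0, feature])
    direction = -sign if base_pred == 1 else sign
    for _ in range(max_steps):
        if model.predict(x_cf.reshape(1, -1))[0] != base_pred:
            return x_cf
        x_cf[feature] += step * direction
    return None  # no counterfactual found within the search budget

print(feature_importance)
print(np.argsort(sample_importance)[-3:])   # three most supportive training points
print(counterfactual(x, model))
```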
  10. Stakeholders: Executives, Engineers, End Users, Regulators.
  11. Findings. 1. Explainability is used for debugging internally. 2. Goals of explainability are not clearly defined within organizations. 3. Technical limitations make explainability hard to deploy in real time.
  12. Findings. 1. Explainability is used for debugging internally. 2. Goals of explainability are not clearly defined within organizations. 3. Technical limitations make explainability hard to deploy in real time.
  13. Use Cases.
  14. Findings. 1. Explainability is used for debugging internally. 2. Goals of explainability are not clearly defined within organizations. 3. Technical limitations make explainability hard to deploy in real time.
  15. Establishing Explainability Goals. 1. Identify stakeholders: who will consume the explanation? 2. Engage stakeholders: what purpose will the explanation serve? 3. Devise workflow: how will the explanation be used in practice?
  16. Findings. 1. Explainability is used for debugging internally. 2. Goals of explainability are not clearly defined within organizations. 3. Technical limitations make explainability hard to deploy in real time.
  17. Limitations. • Spurious correlations exposed by feature-level explanations. • No causal underpinnings to the models themselves.
  18. Limitations (cont.). • Sample importance is computationally infeasible to deploy at scale. • Privacy concerns of model inversion exist.
  19. Findings. 1. Explainability is used for debugging internally. 2. Goals of explainability are not clearly defined within organizations. 3. Technical limitations make explainability hard to deploy in real time.
  20. This Talk: 1. How are existing approaches to explainability used in practice? Only used by developers. 2. Can existing explainability tools be used to ensure model fairness? 3. How can we create explainability tools for external stakeholders?
  21. This Talk: 1. How are existing approaches to explainability used in practice? Only used by developers. 2. Can existing explainability tools be used to ensure model fairness? 3. How can we create explainability tools for external stakeholders?
  22. This Talk: 1. How are existing approaches to explainability used in practice? Only used by developers. 2. Can existing explainability tools be used to ensure model fairness? 3. How can we create explainability tools for external stakeholders?
  23. You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods. Botty Dimanov, Umang Bhatt, Mateja Jamnik, and Adrian Weller. To appear at the European Conference on Artificial Intelligence 2020. http://ecai2020.eu/papers/72_paper.pdf
  24. Takeaway: Feature importance reveals nothing reliable about model fairness.
  25. Why do we care? [Feature-importance bar charts for Model A and Model B.]
  26. Why do we care? [The same charts, with Model A labeled Unfair and Model B labeled Fair.]
  27. Can we manipulate explanations? • Modified explanations via adversarial perturbations of inputs: Ghorbani, Abid, and Zou (AAAI 2019); Dombrowski et al. (NeurIPS 2019); Slack et al. (AIES 2019). • Control visual explanations via adversarial perturbations of parameters: Heo, Joo, and Moon (NeurIPS 2019). • This work: downgrade explanations via adversarial perturbations of parameters to hide unfairness.
  28. Our Setup. Classifier f : X ↦ Y with parameters θ; explanation g(f, x)_j gives the attribution of feature j. Our Goal: a perturbed model f_{θ+δ} with the following desirable properties. Model similarity: ∀i, f_{θ+δ}(x^(i)) ≈ f_θ(x^(i)). Low target feature attribution: ∀i, |g(f_{θ+δ}, x^(i))_j| ≪ |g(f_θ, x^(i))_j|.
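As a rough illustration of the two desirable properties on slide 28, one might check them empirically along the following lines. This is a hedged sketch, not the paper's code: it assumes PyTorch classifiers `f_theta` and `f_theta_delta` (same architecture), a data tensor `X`, and a target feature index `j` are already given, and it uses a plain input-gradient attribution as the explanation g.

```python
# Hedged sketch: empirically checking the two desirable properties from slide 28.
# Assumes PyTorch models `f_theta` (original) and `f_theta_delta` (perturbed),
# a float tensor `X` of shape (n, d), and a target feature index `j`.
import torch

def gradient_attribution(model, X, j):
    """g(f, x)_j: gradient of the predicted-class score w.r.t. feature j."""
    X = X.clone().requires_grad_(True)
    scores = model(X).max(dim=1).values.sum()
    (grad,) = torch.autograd.grad(scores, X)
    return grad[:, j]

def check_properties(f_theta, f_theta_delta, X, j):
    # Model similarity: predictions of f_{θ+δ} should match those of f_θ.
    with torch.no_grad():
        agreement = (f_theta(X).argmax(1) == f_theta_delta(X).argmax(1)).float().mean()
    # Low target feature attribution: |g(f_{θ+δ}, x)_j| should be much smaller than |g(f_θ, x)_j|.
    attr_before = gradient_attribution(f_theta, X, j).abs().mean()
    attr_after = gradient_attribution(f_theta_delta, X, j).abs().mean()
    return agreement.item(), (attr_after / (attr_before + 1e-12)).item()
```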
  29. Our Method: Adversarial Explanation Attack. argmin_δ L′ = L(f_{θ+δ}, x, y) + (α/n) ‖∇_{X_{:,j}} L(f_{θ+δ}, x, y)‖_p
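Read literally, the objective fine-tunes the parameters so that the task loss stays low while the gradient of that loss with respect to the target feature column is penalized. Below is a minimal sketch of that loss in PyTorch; it is my paraphrase of the slide's formula, not the authors' implementation, and `alpha`, `p`, `j`, and the optimizer settings are placeholder assumptions.

```python
# Hedged sketch of the attack objective on slide 29:
#   L' = L(f_{θ+δ}, x, y) + (α / n) * || ∇_{X_{:,j}} L(f_{θ+δ}, x, y) ||_p
# `model` starts from the original parameters θ and is fine-tuned toward θ + δ.
import torch
import torch.nn.functional as F

def attack_loss(model, X, y, j, alpha=1.0, p=1):
    X = X.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(X), y)
    # Gradient of the task loss w.r.t. the target feature column X[:, j].
    (grad_X,) = torch.autograd.grad(task_loss, X, create_graph=True)
    penalty = grad_X[:, j].norm(p=p) / X.shape[0]
    return task_loss + alpha * penalty

# One fine-tuning step toward f_{θ+δ} (illustrative):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# optimizer.zero_grad(); attack_loss(model, X, y, j).backward(); optimizer.step()
```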
  30. Results (Importance Ranking). [Feature-importance rankings under f_θ vs. f_{θ+δ}.]
  31. Results (Importance Ranking). Our adversarial explanation attack: 1. significantly decreases relative importance; 2. generalizes to test points; 3. transfers across explanation methods.
  32. Findings. • Little change in accuracy, but the difference in outputs is detectable. • Low attribution achieved with respect to multiple explanation methods. • High unfairness across multiple fairness metrics (compared to holding the feature constant).
  33. This Talk: 1. How are existing approaches to explainability used in practice? Only used by developers. 2. Can existing explainability tools be used to ensure model fairness? 3. How can we create explainability tools for external stakeholders?
  34. This Talk: 1. How are existing approaches to explainability used in practice? Only used by developers. 2. Can existing explainability tools be used to ensure model fairness? Not in their current form. 3. How can we create explainability tools for external stakeholders?
  35. This Talk: 1. How are existing approaches to explainability used in practice? Only used by developers. 2. Can existing explainability tools be used to ensure model fairness? Not in their current form. 3. How can we create explainability tools for external stakeholders?
  36. Machine Learning Explainability for External Stakeholders. Umang Bhatt, McKane Andrus, Adrian Weller, and Alice Xiang. Convening hosted by CFI, PAI, and IBM. Appeared at the ICML 2020 Workshop on Extending Explainable AI: Beyond Deep Models and Classifiers. https://arxiv.org/abs/2007.05408
  37. Overview. • 33 participants from 5 countries. • 15 ML experts, 3 designers, 6 legal experts, 9 policymakers. • Domain expertise: finance, healthcare, media, and social services. • Goal: facilitate an inter-stakeholder conversation around explainable machine learning.
  38. Two Takeaways. 1. There is a need for community engagement in the development of explainable machine learning. 2. There are nuances in the deployment of explainable machine learning.
  39. Community Engagement. 1. In which context will this explanation be used? Does the context change the properties of the explanations we expose? 2. How should the explanation be evaluated? Both quantitatively and qualitatively… 3. Can we prevent data misuse and preferential treatment by involving affected groups in the development process? 4. Can we educate external stakeholders (and data scientists) regarding the functionalities and limitations of explainable machine learning?
  40. Deploying Explainability. 1. How does uncertainty, both in the model and introduced by the (potentially approximate) explanation technique, affect the resulting explanations? 2. How can stakeholders interact with the resulting explanations? Can explanations be a conduit for interacting with the model? 3. How, if at all, will stakeholder behavior change as a result of the explanation shown? 4. Over time, how will the explanation technique adapt to changes in stakeholder behavior?
  41. Two Takeaways. 1. There is a need for community engagement in the development of explainable machine learning. 2. There are nuances in the deployment of explainable machine learning.
  42. This Talk: 1. How are existing approaches to explainability used in practice? Only used by developers. 2. Can existing explainability tools be used to ensure model fairness? Not in their current form. 3. How can we create explainability tools for external stakeholders?
  43. This Talk: 1. How are existing approaches to explainability used in practice? Only used by developers. 2. Can existing explainability tools be used to ensure model fairness? Not in their current form. 3. How can we create explainability tools for external stakeholders? Community engagement and thoughtful deployment.
  44. This Talk: 1. How are existing approaches to explainability used in practice? Only used by developers. 2. Can existing explainability tools be used to ensure model fairness? Not in their current form. 3. How can we create explainability tools for external stakeholders? Community engagement and thoughtful deployment.
  45. Challenges in Deploying Explainable Machine Learning. Umang Bhatt. usb20@cam.ac.uk. @umangsbhatt. Thanks for listening!
