Responsible AI in Industry: Practical Challenges and Lessons Learned, by Krishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI-based systems? Model fairness, explainability, and protection of user privacy are considered prerequisites for building trust in and adoption of AI systems in high-stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
Valencian Summer School in Machine Learning 2017 - Day 1
Lectures Review: Summary Day 1 Sessions. By Mercè Martín (BigML).
https://bigml.com/events/valencian-summer-school-in-machine-learning-2017
Part of the ongoing effort with Skater to enable better model interpretation for deep neural network models, presented at the AI Conference.
https://conferences.oreilly.com/artificial-intelligence/ai-ny/public/schedule/detail/65118
AI/ML Infra Meetup | ML Explainability in Michelangelo (Alluxio, Inc.)
AI/ML Infra Meetup
May 23, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Eric Wang (Software Engineer, @Uber)
Uber has numerous deep learning models, most of which are highly complex with many layers and a vast number of features. Understanding how these models work is challenging and demands significant resources to experiment with various training algorithms and feature sets. With ML explainability, the ML team aims to bring transparency to these models, helping to clarify their predictions and behavior. This transparency also assists the operations and legal teams in explaining the reasons behind specific prediction outcomes.
In this talk, Eric Wang will discuss the methods Uber used for explaining deep learning models and how these methods were integrated into the Uber AI Michelangelo ecosystem to support offline explanation.
Unified Approach to Interpret Machine Learning Model: SHAP + LIME (Databricks)
For companies that solve real-world problems and generate revenue from data science products, being able to understand why a model makes a certain prediction can be as crucial as achieving high prediction accuracy in many applications. However, as data scientists pursue higher accuracy by implementing complex algorithms such as ensemble or deep learning models, the algorithm itself becomes a black box, and this creates a trade-off between accuracy and interpretability of a model’s output.
To address this problem, a unified framework, SHAP (SHapley Additive exPlanations), was developed to help users interpret the predictions of complex models. In this session, we will talk about how to apply SHAP to various modeling approaches (GLM, XGBoost, CNN) to explain how each feature contributes to a particular prediction and to extract intuitive insights from it. This talk is intended to introduce the concept of a general-purpose model explainer, as well as help practitioners understand SHAP and its applications.
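As a rough orientation, the snippet below sketches how SHAP is typically applied to a tree-based regressor; the dataset (scikit-learn's diabetes data) and model settings are illustrative placeholders, not taken from the talk.

```python
# A minimal sketch of applying SHAP to a tree-based model; the dataset,
# features, and model settings are illustrative, not from the talk.
import shap
import xgboost
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Global view: which features drive predictions across the whole dataset.
shap.summary_plot(shap_values, X)

# Local view: how each feature pushes one prediction away from the baseline.
print(dict(zip(X.columns, shap_values[0].round(3))))
```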
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding why certain predictions are made is essential for assessing trust, which matters greatly if one plans to take action based on a prediction. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. If users do not trust a model, they will never use it.
The Power of Auto ML and How Does it Work (Ivo Andreev)
Automated ML is an approach to minimizing the need for data science effort by enabling domain experts to build ML models without deep knowledge of algorithms, mathematics, or programming. The mechanism works by allowing end users to simply provide data; the system automatically does the rest, determining the approach needed to perform the particular ML task. At first this may sound discouraging to those aiming for the “sexiest job of the 21st century”, the data scientists. However, Auto ML should be considered a democratization of ML rather than automatic data science.
In this session we will talk about how Auto ML works, how it is implemented by Microsoft, and how it could improve the productivity of even professional data scientists.
Feature Engineering in Machine Learning (Knoldus Inc.)
In this Knolx we explore data preprocessing and feature engineering techniques. We also cover what feature engineering is, its importance in machine learning, and how it can help get the best results from the algorithms. A typical workflow, illustrated with a laptop price prediction example, is outlined below; a minimal code sketch follows the list.
Dataset: Gather a large dataset of laptops and their features, including processor speed, RAM, storage, and display size, along with their corresponding prices.
Feature engineering: Extracting meaningful features from the dataset, such as brand, model, and year, and transforming them into a format that machine learning algorithms can use.
Model selection: Choosing the most appropriate machine learning algorithm, such as linear regression, decision tree, or random forest, based on the type of data and desired level of accuracy.
Model training: Splitting the dataset into training and testing sets, and using the training data to train the machine learning model.
Model evaluation: Testing the model's performance on the testing data and evaluating its accuracy using metrics such as mean squared error or R-squared.
Hyperparameter tuning: Optimizing the model's hyperparameters, such as learning rate or regularization strength, to achieve the best performance.
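The sketch below strings these steps together with scikit-learn; the laptops.csv file and its column names are hypothetical placeholders, and a random forest stands in for whichever model the selection step would actually favor.

```python
# A minimal sketch of the laptop-price workflow above, using scikit-learn.
# The "laptops.csv" file and its column names are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("laptops.csv")  # hypothetical dataset
numeric = ["processor_speed_ghz", "ram_gb", "storage_gb", "display_in", "year"]
categorical = ["brand", "model"]

# Feature engineering: scale numeric columns, one-hot encode categorical ones.
prep = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
pipe = Pipeline([("prep", prep), ("model", RandomForestRegressor(random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(
    df[numeric + categorical], df["price"], test_size=0.2, random_state=0)

# Hyperparameter tuning via cross-validated grid search.
search = GridSearchCV(pipe, {"model__n_estimators": [100, 300],
                             "model__max_depth": [None, 10]}, cv=3)
search.fit(X_train, y_train)

# Model evaluation on held-out data.
pred = search.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred), "R2:", r2_score(y_test, pred))
```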
Explainable AI makes algorithms transparent: their behavior can be interpreted, visualized, and explained, so that they can be integrated into fair, secure, and trustworthy AI applications.
The Incredible Disappearing Data Scientist (Rebecca Bilbro)
The last decade saw advances in compute power combine with an avalanche of open source software development, resulting in a revolution in machine learning and scalable analytics. “Data science” and “data product” are now household terms. This led to a new job description, the Data Scientist, which quickly became one of the most significant, exciting, and misunderstood jobs of the 21st century. One part statistician, one part computer scientist, and one part domain expert, data scientists seem poised to become the most pivotal value creators of the information age. And yet, danger (supposedly) lies ahead: human decisions are increasingly outsourced to algorithms of questionable ethical design; we’re putting everything on the blockchain; and perhaps most disturbingly, data science salaries are dropping precipitously as new graduates and Machine Learning as a Service (MLaaS) offerings flood the market. As we move into a future where predictive analytics is no longer a differentiator but instead a core business function, will data scientists proliferate or be automated out of a job?
In this talk, one humble data scientist attempts to cut through the hype to present an alternate vision of what data science is and can become. If not the “Sexiest Job of the 21st Century" as the Harvard Business Review once quipped, what is it like to be a workaday data scientist? What problems are we solving? How do we integrate with mature engineering teams? How do we engage with clients and product owners? How do we deploy non-deterministic models in production? In particular, we’ll examine critical integration points — technological and otherwise — we are currently tackling, which will ultimately determine our success, and our viability, over the next 10 years.
VSSML16 LR1. Summary Day 1
Valencian Summer School in Machine Learning 2016
Day 1
Summary Day 1
Mercè Martin (BigML)
https://bigml.com/events/valencian-summer-school-in-machine-learning-2016
Interpretability of ML models aims to reveal the causes of a prediction and to explain a decision derived from it in a way a human can follow. The traceability of predictions makes it possible, for example, to ensure that their derivation is consistent with an expert's domain knowledge. Unfair bias can also be identified by explaining meaningful examples.
Prediction models can be roughly divided into intrinsically interpretable models and non-interpretable (black-box) models. Intrinsically interpretable models are known for being easy for a human to follow. A typical example of such a model is the decision tree, whose rule-based decision process is intuitive and easily accessible. In contrast, neural networks are regarded as black-box models whose predictions are hard to trace because of their complex network structure.
In this talk, Marcel Spitzer explained the concept of interpretability in the context of machine learning and presented common methods for interpreting models. He placed particular focus on model-agnostic methods that can also be applied to high-performing black-box models.
Event: M3 Minds Mastering Machines
Speaker: Marcel Spitzer
Blog article: https://www.inovex.de/blog/machine-learning-interpretability/
More tech talks: inovex.de/vortraege
More tech articles: inovex.de/blog
Recommender systems have revolutionized how end users experience complex platforms, greatly enhancing the overall user experience. In the past few years, we have witnessed the tremendous success of deep learning in many application domains such as computer vision, speech recognition, and natural language processing. Recent advances in deep learning based recommender systems offer new modeling paradigms, overcoming the obstacles of conventional modeling techniques and achieving even better recommendation results.
Deep learning has accomplished impressive feats in areas such as voice recognition, image processing, and natural language processing. Deep learning enthusiasts have rushed to predict that this family of algorithms is likely to take over most other applications in the near future. This focus on deep architectures seems to have cast a shadow over more “traditional” machine learning and data science approaches, leaving researchers and practitioners alike wondering whether there is any point in investing in feature engineering or simpler models.
In this talk, I will go over what deep learning can and cannot do for you, both now and in the near future. I will also describe how different approaches will continue to be needed, and why their demand will likely grow despite the rise of deep learning. I will support my claims not only by looking at recent publications, but also by using practical examples drawn from my experience at companies at the forefront of machine learning applications, such as Quora.
Week 4 advanced labeling, augmentation and data preprocessing (Ajay Taneja)
These are notes for the Machine Learning Engineering in Production specialization: Week 4 of Machine Learning Data Life Cycle in Production (Course 2), the second course of the MLOps specialization on Coursera.
These are notes for the Machine Learning Engineering in Production specialization: Week 3 of Machine Learning Data Life Cycle in Production (Course 2), the second course of the MLOps specialization on Coursera.
Machine Learning Data Life Cycle in Production (Week 2 feature engineering...) (Ajay Taneja)
These are notes for the Machine Learning Engineering in Production specialization: Week 2 of Machine Learning Data Life Cycle in Production (Course 2), the second course of the MLOps specialization on Coursera.
Course 2 Machine Learning Data LifeCycle in Production - Week 1 (Ajay Taneja)
These are notes for the Machine Learning Engineering in Production specialization: Week 1 of Machine Learning Data Life Cycle in Production (Course 2), the second course of the MLOps specialization on Coursera.
This presentation goes into the details of word embeddings, their applications, learning word embeddings through a shallow neural network, and the Continuous Bag of Words model.
C3 w5
1. Copyright Notice
These slides are distributed under the Creative Commons License. DeepLearning.AI makes these slides available for educational purposes. You may not use or distribute these slides for commercial purposes. You may make copies of these slides and use or distribute them for educational purposes as long as you cite DeepLearning.AI as the source of the slides. For the rest of the details of the license, see https://creativecommons.org/licenses/by-sa/2.0/legalcode
4. Responsible AI
● The development of AI is creating new opportunities to improve people's lives.
● It also raises new questions about the best way to build the following into AI systems:
○ Fairness: ensure we are working towards systems that are fair and inclusive to all users; explainability helps ensure fairness.
○ Privacy: training models using sensitive data needs privacy-preserving safeguards.
○ Security: identifying potential threats can help keep AI systems safe and secure.
○ Explainability: understanding how and why ML models make certain predictions; explainability helps ensure fairness.
5. Explainable Artificial Intelligence (XAI)
The field of XAI allows ML systems to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important:
● To ensure algorithmic fairness.
● To identify potential bias and problems in the training data.
● To ensure algorithms/models work as expected.
6. Need for Explainability in AI
1. Models with high sensitivity, including natural language networks, can generate wildly wrong results
2. Attacks
3. Fairness
4. Reputation and Branding
5. Legal and regulatory concerns
6. Customers and other stakeholders may question or challenge model decisions
7. Deep Neural Networks (DNNs) can be fooled
DNNs can be fooled into misclassifying inputs with no resemblance to the true category.
10. What is interpretability?
“(Models) are interpretable if their operations can be understood by a human, either through introspection or through a produced explanation.”
- O. Biran, C. Cotton, “Explanation and justification in machine learning: A survey”
11. What are the requirements?
You should be able to query the model to understand:
● Why did the model behave in a certain way?
● How can we trust the predictions made by the model?
● What information can the model provide to avoid prediction errors?
14. Intrinsic or Post-Hoc?
Post-hoc methods:
● Treat models as black boxes
● Are agnostic to model architecture
● Extract relationships between features and model predictions
● Are applied after training
16. Model Specific or Model Agnostic
Model specific:
● These tools are limited to specific model classes
● Intrinsically interpretable model techniques are model specific
● Tools designed for particular model architectures
● Example: interpretation of regression weights in linear models
Model agnostic:
● Applied to any model after it is trained
● Do not have access to the internals of the model
● Work by analyzing feature input and output pairs
(Diagram: data flows into the model to produce a prediction; model-specific explanations are drawn from the model itself, while model-agnostic explanations treat the model as "magic" and use only its inputs and outputs.)
18. Local or Global?
● Local: the interpretation method explains an individual prediction.
● Feature attribution is the identification of relevant features as an explanation for a model.
19. Local or Global?
● Global: the interpretation method explains the entire model behaviour.
● Feature attribution summary for the entire test data set.
21. Intrinsically Interpretable Models
● How the model works is self evident
● Many classic models are highly interpretable
● Neural networks look like “black boxes”
● Newer architectures focus on designing for interpretability
23. Interpretable Models
Algorithm            | Linear | Monotonic | Feature Interaction | Task
Linear regression    | Yes    | Yes       | No                  | regr
Logistic regression  | No     | Yes       | No                  | class
Decision trees       | No     | Some      | Yes                 | class, regr
RuleFit              | Yes*   | No        | Yes                 | class, regr
K-nearest neighbors  | No     | No        | No                  | class, regr
TF Lattice           | Yes*   | Yes       | Yes                 | class, regr
24. Model Architecture Influence on Interpretability
(Chart: interpretability vs. accuracy trade-off across model architectures. From most interpretable and least accurate to least interpretable and most accurate: linear regression, decision trees, K-nearest neighbours, random forests, SVMs, neural networks; TF Lattice also appears on the chart.)
26. Interpretation from Weights
Linear models have an easy-to-understand interpretation derived from their weights (see the sketch below):
● Numerical features: an increase of one unit in a feature increases the prediction by the value of the corresponding weight.
● Binary features: changing between the 0 and 1 category changes the prediction by the value of the feature’s weight.
● Categorical features: one-hot encoding affects only one weight.
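A minimal sketch of reading those weights with scikit-learn, on a made-up two-feature dataset:

```python
# A minimal sketch (with synthetic data) of interpreting a linear model's weights.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                     # two numerical features
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)
# Each coefficient is the change in the prediction for a one-unit increase
# in that feature, holding the others fixed.
print(model.coef_)       # approximately [ 3.0, -2.0]
print(model.intercept_)  # approximately 0.0
```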
27. Feature Importance
● Relevance of a given feature to generate model results
● Calculation is model dependent
● Example: linear regression model, t-statistic
28. More advanced models: TensorFlow Lattice
● Overlays a grid onto the feature space and learns values for the output at the vertices of the grid
● Linearly interpolates from the lattice values surrounding a point (illustrated in the sketch below)
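To make the interpolation idea concrete, here is a small hand-rolled sketch (not the tensorflow_lattice API) of evaluating a 2x2 lattice by bilinear interpolation of hand-picked vertex values:

```python
# Illustrative sketch of lattice interpolation (not the tensorflow_lattice API):
# output values sit at grid vertices; a point's prediction is the bilinear
# interpolation of the surrounding vertex values.

# Values at the four vertices of a 2x2 lattice over two features in [0, 1]
# (hand-picked here; in TF Lattice these are learned parameters).
vertex = {(0, 0): 0.0, (1, 0): 0.4, (0, 1): 0.7, (1, 1): 1.0}

def lattice_predict(x0, x1):
    """Bilinearly interpolate between the four surrounding vertex values."""
    return ((1 - x0) * (1 - x1) * vertex[(0, 0)]
            + x0 * (1 - x1) * vertex[(1, 0)]
            + (1 - x0) * x1 * vertex[(0, 1)]
            + x0 * x1 * vertex[(1, 1)])

print(lattice_predict(0.5, 0.5))  # 0.525: the average of the four corner values
```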
29. More advanced models: TensorFlow Lattice
● Enables you to inject domain knowledge into the learning process through common-sense or policy-driven shape constraints
● Set constraints such as monotonicity, convexity, and how features interact
31. TensorFlow Lattice: Issues
Dimensionality:
● The number of parameters of a lattice layer increases exponentially with the number of input features
● Very rough rule: fewer than 20 features is OK without ensembling
33. Model Agnostic Methods
These methods separate explanations from the machine learning model.
Desired characteristics:
● Model flexibility
● Explanation flexibility
● Representation flexibility
34. Model Agnostic Methods
● Partial Dependence Plots
● Individual Conditional Expectation
● Accumulated Local Effects
● Permutation Feature Importance
● Global Surrogate
● Local Surrogate (LIME)
● Shapley Values
● SHAP
36. Partial Dependence Plots (PDP)
A partial dependence plot shows:
● The marginal effect one or two features have on the model result
● Whether the relationship between the target and the feature is linear, monotonic, or more complex
37. Partial Dependence Plots
The partial function \hat{f}_{x_S} is estimated by calculating averages in the training data (the standard Monte Carlo estimate):
\hat{f}_{x_S}(x_S) = \frac{1}{n} \sum_{i=1}^{n} \hat{f}(x_S, x_C^{(i)})
where x_C^{(i)} denotes the values of the remaining (complement) features for the i-th training example.
38. Partial Dependence Plots: Examples
PDP plots for a linear regression model trained on a bike rentals dataset to predict the number of bikes rented.
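A minimal PDP sketch with scikit-learn; the diabetes dataset and gradient-boosted model stand in for the bike-rentals setup on the slide:

```python
# A minimal partial dependence plot sketch with scikit-learn.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Marginal effect of 'bmi' and 'bp' on the prediction, averaged over the
# rest of the training data.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
plt.show()
```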
40. Advantages of PDP
● Computation is intuitive
● If the feature whose PDP is calculated has no correlations with other features, the PDP perfectly represents how the feature influences the prediction on average
● Easy to implement
41. Disadvantages of PDP
● Realistic maximum number of features in PDP is 2
● PDP assumes that feature values have no interactions
43. Permutation Feature Importance
Feature importance measures the increase in prediction error after permuting the feature's values.
● A feature is important if shuffling its values increases the model error
● A feature is unimportant if shuffling its values leaves the model error unchanged
44. Permutation Feature Importance
● Estimate the original model error
● For each feature:
○ Permute the feature values in the data to break its association with the true outcome
○ Estimate the error based on predictions on the permuted data
○ Calculate the permutation feature importance
● Sort features by descending feature importance (a scikit-learn sketch follows this list)
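A minimal sketch of these steps using scikit-learn's built-in permutation importance, on a generic dataset chosen only for illustration:

```python
# A minimal permutation feature importance sketch with scikit-learn,
# computed on a held-out split.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature n_repeats times and measure the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Sort features by descending importance and print the top five.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:25s} {result.importances_mean[i]:.4f}")
```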
45. Advantages of Permutation Feature Importance
● Nice interpretation: shows the increase in model error when the feature's information is destroyed
● Provides global insight into the model’s behaviour
● Does not require retraining of the model
46. Disadvantages of Permutation Feature Importance
● It is unclear whether testing or training data should be used for visualization
● Can be biased, since it can create unlikely feature combinations in the case of strongly correlated features
● You need access to the labeled data
48. Shapley Value
● The Shapley value is a method for assigning payouts to players depending on their contribution to the total payout
● Applying that to ML, we define:
○ A feature is a “player” in a game
○ The prediction is the “payout”
○ The Shapley value tells us how the “payout” (the feature contributions) can be distributed among the features
(A brute-force sketch follows this slide.)
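The brute-force sketch below computes exact Shapley values for a toy, hand-specified value function v(S); in an ML setting v(S) would be the expected model prediction when only the features in coalition S are known.

```python
# Brute-force Shapley values for a toy value function.
# v(S) is the "payout" when only the features in coalition S are known;
# the numbers below are hand-picked for illustration.
from itertools import combinations
from math import factorial

players = ("park", "size", "floor")
v = {
    frozenset(): 0.0,
    frozenset({"park"}): 10.0,
    frozenset({"size"}): 20.0,
    frozenset({"floor"}): 0.0,
    frozenset({"park", "size"}): 35.0,
    frozenset({"park", "floor"}): 10.0,
    frozenset({"size", "floor"}): 20.0,
    frozenset({"park", "size", "floor"}): 40.0,
}

def shapley(player):
    """Weighted average of the player's marginal contribution over all coalitions."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            s = frozenset(subset)
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v[s | {player}] - v[s])
    return total

for p in players:
    print(p, round(shapley(p), 3))
# Efficiency property: the values sum to v(all players) - v(empty set) = 40.0
```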
49. Shapley Value: Example
Suppose you trained an ML model to predict apartment prices. You need to explain why the model predicts €300,000 for a certain apartment (50m2, 2nd floor). The average prediction over all apartments is €310,000.
50. Shapley Value
Term in Game Theory | Relation to ML | Relation to House Prices Example
Game | Prediction task for a single instance of the dataset | Prediction of the house price for a single instance
Gain | Actual prediction for the instance minus the average prediction for all instances | Prediction for the house price (€300,000) minus the average prediction (€310,000) = -€10,000
Players | Feature values that contribute to the prediction | ‘park=nearby’, ‘cat=banned’, ‘area=50m2’, ‘floor=2nd’
51. Shapley Value
Goal: explain the difference between the actual prediction (€300,000) and the average prediction (€310,000): a difference of -€10,000.
One possible explanation:
Feature | Contribution
‘park-nearby’ | +€30,000
‘size-50’ | +€10,000
‘floor-2nd’ | €0
‘cat-banned’ | -€50,000
Total | -€10,000 (final prediction minus average prediction)
52. Advantages of Shapley Values
● Based on a solid theoretical foundation; satisfies the Efficiency, Symmetry, Dummy, and Additivity properties
● Enables contrastive explanations
● The value is fairly distributed among all features
53. Disadvantages of Shapley Values
● Computationally expensive
● Can be easily misinterpreted
● Always uses all the features, so not well suited for explanations involving only a few features
● No prediction model; can’t be used for “what if” hypothesis testing
● Does not work well when features are correlated
55. SHAP
● SHAP (SHapley Additive exPlanations) is a framework for Shapley values which assigns each feature an importance value for a particular prediction
● Includes extensions:
○ TreeExplainer: high-speed exact algorithm for tree ensembles
○ DeepExplainer: high-speed approximation algorithm for SHAP values in deep learning models
○ GradientExplainer: combines ideas from Integrated Gradients, SHAP, and SmoothGrad into a single expected-value equation
○ KernelExplainer: uses a specially-weighted local linear regression to estimate SHAP values for any model
56. SHAP Explanation Force Plots
● Shapley values can be visualized as forces
● The prediction starts from the baseline (the average of all predictions)
● Each feature value is a force that increases (red) or decreases (blue) the prediction
(A snippet follows.)
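Assuming the `explainer`, `shap_values`, and `X` objects from the earlier SHAP sketch, a force plot for a single prediction can be rendered like this in a notebook:

```python
# Continuing the earlier SHAP sketch: visualize one prediction as a force plot.
# `explainer`, `shap_values`, and `X` are assumed to come from that sketch.
import shap

shap.initjs()  # enables the interactive plot in a notebook
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :])
```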
60. Testing Concept Activation Vectors (TCAV)
Concept Activation Vectors (CAVs):
● Represent a neural network’s internal state in terms of human-friendly concepts
● Defined using sets of examples that show the concept
63. Local Interpretable Model-agnostic Explanations (LIME)
● Implements local surrogate models: interpretable models that are used to explain individual predictions
● Using data points close to the individual prediction, LIME trains an interpretable model to approximate the predictions of the real model
● The new interpretable model is then used to interpret the real result
(A usage sketch follows.)
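A minimal LIME sketch for a tabular classifier; the dataset and model are generic placeholders chosen only for illustration:

```python
# A minimal LIME sketch for tabular data using the `lime` package.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train, mode="classification",
    feature_names=list(data.feature_names), class_names=list(data.target_names))

# Fit a local interpretable (linear) surrogate around one test instance.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())  # top feature rules and their local weights
```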
65. Google Cloud AI Explanations for AI Platform
Use cases:
● Explain why an individual data point received a particular prediction
● Debug odd behavior from a model
● Refine a model or the data collection process
● Verify that the model’s behavior is acceptable
● Present the gist of the model
69. AI Explanations: Integrated Gradients
A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as Shapley values (a TensorFlow sketch follows).
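A hedged sketch of the Integrated Gradients computation itself in TensorFlow (this is not Google Cloud's implementation): average the gradients along a straight path from a baseline to the input, then scale by the difference from the baseline.

```python
# A minimal Integrated Gradients sketch in TensorFlow (simple Riemann-sum
# approximation; not Google Cloud's implementation).
import tensorflow as tf

def integrated_gradients(model, x, baseline, target_class, steps=50):
    # Interpolate between the baseline and the input in `steps` increments.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps), [steps] + [1] * len(x.shape))
    interpolated = baseline[None, ...] + alphas * (x - baseline)[None, ...]
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)[:, target_class]
    grads = tape.gradient(preds, interpolated)   # gradient at each path step
    avg_grads = tf.reduce_mean(grads, axis=0)    # average over the path
    return (x - baseline) * avg_grads            # per-feature attributions

# Usage (assuming `model` is a tf.keras classifier and `x` a single input tensor):
# attributions = integrated_gradients(model, x, tf.zeros_like(x), target_class=0)
```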
70. AI Explanations: XRAI (eXplanation with Ranked Area Integrals)
XRAI assesses overlapping regions of the image to create a saliency map:
● Highlights relevant regions of the image rather than pixels
● Aggregates the pixel-level attribution within each segment and ranks the segments