ML Explainability in Michelangelo
Eric Wang
Michelangelo - Uber ML Platform Team
Contents
01 Challenges and Needs
02 Importance of ML explainability
03 Explainers
04 Architecture
05 User workflow and case studies
06 Future opportunities and Q&A
Challenges and Needs
Needs
- Understand and interpret how models make decisions.
- Provide transparency and understanding for ML practitioners and stakeholders
- Model exploration and understanding
- Efficient evaluation of their features
Why this is important
● Uber operates ~1000 ML pipelines
● No feature importance insight is offered for DL models
● Exploring features by training new models is time consuming (efficiency/resource cost)
Challenges and Needs
Michelangelo provided visual interfaces for:
- Model performance
- Feature null rate monitoring
Importance of Model Explainability
[Diagram: comparing model_1's score with model_2's score: does a better score mean a better model?]
Summary stats (AUC, MAE…) are informative, but not instructive for debugging
Questions:
1. When some features drift or their quality changes, are they important enough to matter?
2. Why do two models perform differently, and which features drive the outcome more?
3. How do we provide explanations for operations and legal teams?
User’s Request
Making models more transparent and interpretable

“Needs to implement Explainable AI (XAI) for their Keras model to provide clear explanations for model decisions. The team is investigating replacing a formulaic model with the DNN model. The formulaic model is obviously more interpretable, so the team wants the DNN model to be roughly explainable by the same features.”

“Needs to provide explanations for business owners regarding how a feed is promoted; this also involves helping the legal and marketing teams understand how the decision is made.”

“Needs to provide explanations for a DL model in online prediction, following the same process as the existing XGBoost model. This requires a solution that integrates into training so we have baselines for the explainer at realtime.”
Importance of Model Explainability
ML is a widely used technology for Uber’s business.
However, developing successful models is a long and non-trivial process.
The 80/20 rule in machine learning: 20% of the effort goes into building the initial working model, 80% into improving its performance to the ideal level.
(From https://cornellius.substack.com/p/pareto-principle-in-machine-learning)
Model debugging in Michelangelo: make the 80% effort more efficient and effective.
Explainability in model debugging: transparency and trust, feature importance analysis, comparison.
Explanation methods
TreeShap
Interactive tree ensemble model visualizer on frontend
Data source: any serialized tree model (Spark GBDT, XGBoost, ...)
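The frontend visualizer is Michelangelo-internal, but as a rough external illustration, TreeSHAP attributions for a serialized tree model can be computed with the open-source shap library. The synthetic data and model below are hypothetical, not the production setup:

import numpy as np
import shap
import xgboost as xgb

# Hypothetical training data: 1000 rows, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

# TreeExplainer runs the polynomial-time TreeSHAP algorithm over the serialized trees.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])   # local attributions: one value per feature per row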
KernelShap
The good
Model Agnostic
Local explanation support
Captures Feature Interactions
Comprehensive Explanations
The bad
Computational Complexity
Scalability Issues
Independence Assumption
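A minimal sketch of the trade-off above, assuming the open-source shap library: KernelSHAP only needs a black-box predict function (model agnostic), but its cost forces a small, summarized background set and a limited number of coalition samples. The predict_fn and data here are hypothetical:

import numpy as np
import shap

# Any black-box scoring function works; this one is purely hypothetical.
def predict_fn(X):
    return 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.3 * X[:, 2])))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))

# Cost grows with the background size and the number of coalition samples,
# so a small summarized background set is typically used.
background = shap.kmeans(X, 20)
explainer = shap.KernelExplainer(predict_fn, background)
shap_values = explainer.shap_values(X[:10], nsamples=200)   # explain 10 rows only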
Integrated gradients
Why?
● Gradient based, with comparison against baselines
● A popular interpretability technique for any differentiable model (e.g. images, text, structured data)
● Scales to models with large computation needs
● Many machine learning libraries (TensorFlow, PyTorch) provide implementations of IG
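A minimal sketch of IG on a tabular PyTorch model, assuming Captum's IntegratedGradients implementation; the model and tensors below are hypothetical. The convergence delta reflects the completeness property discussed on the next slide:

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Hypothetical tabular model with one scalar output per example.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

def forward_fn(x):
    return model(x).squeeze(-1)          # scalar score per example

inputs = torch.randn(4, 8)               # batch of 4 examples
baselines = torch.zeros_like(inputs)     # all-zeros baseline for comparison

ig = IntegratedGradients(forward_fn)
attributions, delta = ig.attribute(
    inputs, baselines=baselines, n_steps=50, return_convergence_delta=True
)

# Completeness: attributions sum to prediction minus baseline prediction (up to delta).
print(attributions.sum(dim=1))
print(forward_fn(inputs) - forward_fn(baselines))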
Integrated gradients
Benefits
● Completeness
● Interpretability
● Feature dependency agnostic
● Efficiency
[Figure: per-feature attribution plot showing feature values, the predicted score, and the average prediction]
Effect of features on the prediction: 0 + 0.17 + 0.06 - 0.06 - 0.07 - 0.08 + 0.09 - 0.1 - 0.1 - 0.13 - 0.36 = -0.58 ≈ -0.6
Integrated gradients
Notes
● Flatten Input features
● Choose the right layers (especially with categorical features)
● Use model wrapper to aggregate all outputs if possible
Using integrated gradients
Model, model and model
The explainer needs the model itself, but the model packaged for serving is not a raw model!
[Diagram: serving package: Basis feature set → Feature joins → Aggregated feature set → Feature transformation → Prediction (serving model) → Decision threshold → Post processing]
Using integrated gradients
Save the DL model separately
[Diagram: training produces both the serving model (torchscript, tf.compat.v1…), which is deployed to the endpoint, and the raw model (keras.model, torch.nn.model, lightning…), which is handed to the explainer]
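A hedged sketch of this split, assuming a PyTorch model: the traced TorchScript artifact goes to the serving endpoint, while the raw nn.Module weights are kept separately so the explainer can compute gradients through the eager model. The file names are hypothetical:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# Serving artifact: a traced TorchScript model deployed to the prediction endpoint.
example = torch.randn(1, 8)
torch.jit.trace(model, example).save("serving_model.pt")

# Raw model saved separately so the explainer can compute gradients through it later.
torch.save(model.state_dict(), "raw_model_state_dict.pt")

# Later, in the explainer:
raw_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
raw_model.load_state_dict(torch.load("raw_model_state_dict.pt"))
raw_model.eval()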
Using integrated gradients
Flatten input features
- Entity: an Uber business entity (ex: city, rider, driver, store)
- Feature Group: a feature group for a given entity maps to a Hive table and has features that are related and convenient to compute together
[Diagram: hierarchy of Entity → Feature Groups 1-3 → Features 1-3, with an importance level at each node]
Using integrated gradients
Flatten input features
[Diagram: Entity → Feature Groups 1-3 → Features 1-4; each feature is vectorized or bucketized to form the input to the model, with an importance level at each node]
Using integrated gradients
Flatten input features
[Diagram: Features 1-4, vectorized or bucketized, flattened into the input to the model]
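One way to realize this flattening, sketched with hypothetical mappings: per-column attributions on the flattened model input are rolled back up to the original features and feature groups. The dictionaries and values below are illustrative only:

import numpy as np

# Hypothetical mapping from flattened model-input columns back to original features,
# and from features back to feature groups.
column_to_feature = {
    0: "feature_1", 1: "feature_1",                   # feature_1 vectorized into 2 columns
    2: "feature_2", 3: "feature_2", 4: "feature_2",   # feature_2 bucketized into 3 columns
    5: "feature_3",
}
feature_to_group = {"feature_1": "group_1", "feature_2": "group_2", "feature_3": "group_2"}

attributions = np.array([0.05, 0.02, -0.10, 0.01, 0.03, 0.20])   # per flattened column

feature_importance, group_importance = {}, {}
for col, attr in enumerate(attributions):
    feat = column_to_feature[col]
    feature_importance[feat] = feature_importance.get(feat, 0.0) + attr
for feat, attr in feature_importance.items():
    group = feature_to_group[feat]
    group_importance[group] = group_importance.get(group, 0.0) + attr

print(feature_importance)   # importance rolled up to the original features
print(group_importance)     # importance rolled up to the feature groups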
Using integrated gradients
Choose the right layers
● Support both PyTorch and Keras
● Support gradients on input or output
● Ideally pick the right layer for categorical features
[Diagram: Explainer attached to a chosen layer of the model]
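A minimal sketch of the layer choice for categorical features, assuming Captum's LayerIntegratedGradients; the TabularModel below is hypothetical. Attribution is taken at the embedding layer's output because gradients are not defined for raw integer ids:

import torch
import torch.nn as nn
from captum.attr import LayerIntegratedGradients

class TabularModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(100, 8)   # a categorical feature with 100 levels
        self.head = nn.Sequential(nn.Linear(8 + 4, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, cat_ids, numeric):
        emb = self.embedding(cat_ids)            # (batch, 8)
        return self.head(torch.cat([emb, numeric], dim=1)).squeeze(-1)

model = TabularModel().eval()
cat_ids = torch.randint(0, 100, (4,))
numeric = torch.randn(4, 4)

# Attribute at the embedding layer's output rather than the raw integer ids.
lig = LayerIntegratedGradients(model, model.embedding)
attributions = lig.attribute(
    inputs=cat_ids,
    baselines=torch.zeros_like(cat_ids),         # baseline category id 0
    additional_forward_args=(numeric,),
    n_steps=50,
)
# attributions has shape (4, 8); summing over the embedding dimension gives one
# importance value for the categorical feature per example.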
Using integrated gradients
Use a model wrapper to aggregate all outputs if possible
[Diagram: model prediction pipeline: Basis feature set → Feature joins → Aggregated feature set → Feature transformation → Prediction → Decision threshold → Post processing]
Using integrated gradients
Use a model wrapper to aggregate all outputs if possible
[Diagram: same prediction pipeline, with the Prediction step calibrated: Basis feature set → Feature joins → Aggregated feature set → Feature transformation → Prediction (calibrated) → Decision threshold → Post processing]
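A hedged sketch of such a wrapper, assuming PyTorch and a simple Platt-style calibration (the parameters a and b are hypothetical): the explainer sees one module whose output is the final calibrated score rather than the raw margin:

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class CalibratedWrapper(nn.Module):
    """Wrap the raw model plus post-processing so attributions explain the final score."""
    def __init__(self, raw_model, a=1.7, b=-0.2):   # hypothetical Platt-scaling parameters
        super().__init__()
        self.raw_model = raw_model
        self.a, self.b = a, b

    def forward(self, x):
        margin = self.raw_model(x).squeeze(-1)           # raw model output
        return torch.sigmoid(self.a * margin + self.b)   # calibrated probability

raw_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
wrapped = CalibratedWrapper(raw_model).eval()

# The wrapped module is what gets handed to the explainer.
ig = IntegratedGradients(wrapped)
x = torch.randn(4, 8)
attributions = ig.attribute(x, baselines=torch.zeros_like(x), n_steps=50)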
ML explainer in Michelangelo
Notebooks
1. Model debugging
2. Feature importance comparison
3. Visualization
Enabled for users
1. Different explainers (IG, TreeShap, KernelShap, etc)
2. Data conversion among different formats
3. Plotting
4. Model wrapper for calibration
Visualize using Shapley values
● Backed by intuitive notions of what makes a good explanation
● Allows for both local and global reasoning
● Model agnostic
● Good adoption of popular explanation techniques
[Figure: per-feature Shapley value plot for Feature_0 through Feature_6]
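A minimal plotting sketch, assuming the open-source shap library; the attribution values below are hypothetical (in practice they would come from TreeShap, KernelShap, or IG):

import numpy as np
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))
feature_names = [f"Feature_{i}" for i in range(7)]

# Hypothetical attributions; in practice these come from one of the explainers above.
shap_values = X * rng.normal(scale=0.1, size=(1, 7))

# Global view: mean |SHAP| bar chart; per-example view: beeswarm summary plot.
shap.summary_plot(shap_values, X, feature_names=feature_names, plot_type="bar")
shap.summary_plot(shap_values, X, feature_names=feature_names)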
ML explainer in Michelangelo
Generate feature importance in training pipeline
Model training pipelines
[Diagram: Basis feature set → Feature joins → Aggregated feature set → Feature transformation → Trainer → Explainer → Packaging]
ML explainer in Michelangelo
Monitoring Pipelines
1. Generate feature importance during training time
2. Set different alert thresholds based on importance
3. Reduce noise from feature quality (null rate) alerts
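An illustrative (purely hypothetical) rule for points 2 and 3: scale the null-rate alert threshold by training-time importance so that low-importance features do not generate noisy alerts. Feature names and thresholds are made up:

# Hypothetical importance-aware null-rate monitoring rule.
feature_importance = {"eta_minutes": 0.42, "store_rating": 0.31, "promo_flag": 0.03}
null_rate = {"eta_minutes": 0.02, "store_rating": 0.15, "promo_flag": 0.40}

BASE_THRESHOLD = 0.10   # strict threshold for important features

for feature, importance in feature_importance.items():
    # Important features get a strict threshold; unimportant ones a lenient one.
    threshold = BASE_THRESHOLD if importance >= 0.10 else 0.50
    if null_rate[feature] > threshold:
        print(f"ALERT: {feature} null rate {null_rate[feature]:.0%} exceeds {threshold:.0%}")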
Case Studies
Case 1. Identifying Useful Features
[Diagram: suburban vs. non-suburban maps, compared]
Scenario
- A team at Uber observed that the order conversion rate is very different between suburban and non-suburban areas.
- Adding new features did not change the model’s overall performance.
- Which features matter most across the different datasets?
Method: compared different datasets on the same model
Findings: the location feature is more important than engagement features (such as historical orders) in the non-suburban dataset
Conclusion: zoom in on the location feature to make it a bit more accurate; a smaller hexagon size helps
Case Studies
Case 2. Identifying false positives/negatives
Scenario
A team wants to see which photos the model predicted incorrectly, since the action taken carries a cost for a wrong prediction.
Method:
● generate feature importance for all low-prediction-score examples
● generate additional features by calling external label/object detection models
Findings: some object names were not categorized properly
Conclusion:
- Created one-hot encoded features from the dropped objects
- Created features that check whether the string contains certain words
Architecture
XAI framework
Components:
1. Data processing
a. Converting from PySpark to numpy
b. Feature flattening for calculating gradients
2. Explainer
a. Supports multiple explainers (TreeShap/KernelShap/IG…)
3. Model wrapper
a. Supports different model caller or forward functions (keyword based or array based)
b. Supports calibration and aggregation, or a specific output layer
4. Importance aggregation
a. Aggregates importance across multiple dimensions
b. Feature mapping from output back to input
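A rough sketch of how these four components could fit together as Python interfaces; all function names are hypothetical and this is not Michelangelo's actual code:

import numpy as np

def spark_df_to_numpy(spark_df, feature_columns):
    """Data processing: collect a (sampled) Spark DataFrame into a dense numpy array."""
    pdf = spark_df.select(feature_columns).toPandas()
    return pdf.to_numpy(dtype=np.float32)

def make_explainer(kind, model, background=None):
    """Explainer factory supporting multiple attribution methods."""
    if kind == "treeshap":
        import shap
        return shap.TreeExplainer(model)
    if kind == "kernelshap":
        import shap
        return shap.KernelExplainer(model.predict, background)
    if kind == "ig":
        from captum.attr import IntegratedGradients
        return IntegratedGradients(model)
    raise ValueError(f"unknown explainer: {kind}")

def aggregate_importance(attributions, column_to_feature):
    """Importance aggregation: roll per-column attributions up to the original features."""
    totals = {}
    for col, feature in column_to_feature.items():
        totals[feature] = totals.get(feature, 0.0) + float(np.abs(attributions[:, col]).mean())
    return totals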
Future opportunities
● Support LLM explanations in prompt engineering
● Feature selection assistant
● Interactive visualization tools
Q&A