Building trustworthy, transparent and unbiased machine learning models?
Get started with explainX, which brings state-of-the-art explainability techniques under one roof, accessible via one line of code.
Learn the major modules within the explainX explainable AI and model interpretability framework.
These slides are taken from Raheel's presentation at UnpackAI's forum on Data Ethics in AI.
3. Are you confident in your AI's performance?
Do you have the required visibility into it?
Is it biased?
Accurate? Trustworthy?
4. By 2022, according to Gartner, 85% of AI projects will deliver erroneous outcomes due to bias in data and algorithms, and a lack of interpretability and trust in most AI models.
8. - GDPR concerns around lack of explainability in AI (European Commission)
- “Companies should commit to ensuring systems that could fall under GDPR, including AI, will be compliant. The threat of sizeable fines of €20 million or 4% of global turnover provides a sharp incentive.” (GDPR)
- “Article 22 of GDPR empowers individuals with the right to demand an explanation of how an AI system made a decision that affects them.” (GDPR)
- Growing Global AI Regulation
- “Algorithmic Accountability Act 2019: Requires companies to provide an assessment of the risks posed by the automated decision system to privacy or security, and the risks that contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers.”
- “Washington Bill 1655: Establishes guidelines for the use of automated decision systems to protect consumers, improve transparency, and create more market predictability.”
9. MOST MODELS ARE BLACK BOXES!
AI Model
❖ No Visibility
❖ No Explanations
❖ No Monitoring
Business User
Can I trust our AI decisions?
Customer Support
How do I answer a customer's query?
Data Scientist
Is my model accurate & trustworthy?
IT & Operations
How do I monitor & debug my model?
Regulators
Is the AI model fair?
11. How can we make interpretability more accessible to data scientists, ML engineers & researchers?
12. explainX.ai
OPEN SOURCE MODEL INTERPRETABILITY FRAMEWORK FOR MODEL DEVELOPERS
GLOBAL MODEL EXPLAINABILITY
LOCAL PREDICTION EXPLANATIONS
FEATURE ANALYSIS
DATA DISTRIBUTIONS
13. ATTRIBUTION ALGORITHMS
+ Integrated Gradients
+ DeepLIFT
+ SHAP
+ LIME
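Attribution methods like the ones above assign each input feature a share of responsibility for the model's predictions. As a minimal, library-free illustration of the idea (this is plain permutation-style importance, not SHAP or Integrated Gradients, and the toy model and data are invented for the sketch):

```python
def model(x):
    # Toy black-box: heavy weight on feature 0, none on feature 2.
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

X = [[1.0, 2.0, 3.0], [2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [4.0, 0.0, 2.0]]
y = [model(x) for x in X]

def mse(f, X, y):
    return sum((f(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def attribution_scores(f, X, y):
    """Error increase when one feature column is permuted (reversed here,
    to keep the sketch deterministic). Bigger increase = more important."""
    base = mse(f, X, y)
    scores = []
    for j in range(len(X[0])):
        col = [x[j] for x in X][::-1]  # deterministic stand-in for a random shuffle
        Xp = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
        scores.append(mse(f, Xp, y) - base)
    return scores

scores = attribution_scores(model, X, y)
# Feature 0 (weight 3) dominates; feature 2 (weight 0) scores exactly 0.
```

Real attribution algorithms differ in how they perturb or decompose the input, but they share this shape: probe the black box, measure how the output reacts per feature.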
FEATURE INTERACTION METHODS
+ ALEs & PDPs (accumulated local effects & partial dependence plots)
+ Auto-Bias Detection
+ What-If Analysis
+ Data Distributions
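A partial dependence plot traces how the average prediction moves as one feature is swept across a grid while the other features keep their observed values. A minimal sketch of that computation (the toy model, data, and grid are invented for illustration):

```python
def model(x):
    # Toy black-box with an interaction between features 0 and 1.
    return x[0] ** 2 + x[0] * x[1]

X = [[0.0, 1.0], [1.0, 2.0], [2.0, 0.0], [3.0, 1.0]]

def partial_dependence(f, X, feature, grid):
    """Average prediction with `feature` forced to each grid value."""
    curve = []
    for v in grid:
        preds = [f(x[:feature] + [v] + x[feature + 1:]) for x in X]
        curve.append(sum(preds) / len(preds))
    return curve

pdp = partial_dependence(model, X, feature=0, grid=[0.0, 1.0, 2.0])
```

ALEs refine this idea by averaging local prediction differences within intervals, which behaves better when features are correlated; the sweep-and-average skeleton is the same.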
RULES-BASED ALGORITHMS
+ Prototypes
+ DiCE (Diverse Counterfactual Explanations)
+ Anchors (Scoped Rules)
+ Decision Trees
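Counterfactual methods answer: what is the smallest change to this input that flips the model's decision? A brute-force sketch over a tiny perturbation grid (the toy classifier, features, and L1 cost are invented; real counterfactual libraries replace this exhaustive search with an optimizer and add diversity and plausibility constraints):

```python
from itertools import product

def classifier(x):
    # Toy loan model: approve (1) when 2*income + savings > 10.
    return 1 if 2.0 * x[0] + 1.0 * x[1] > 10.0 else 0

def counterfactual(x, deltas=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """Brute-force the smallest (L1-cost) perturbation that flips the decision."""
    target = 1 - classifier(x)
    best, best_cost = None, float("inf")
    for d in product(deltas, repeat=len(x)):
        cand = [xi + di for xi, di in zip(x, d)]
        cost = sum(abs(di) for di in d)
        if classifier(cand) == target and cost < best_cost:
            best, best_cost = cand, cost
    return best

cf = counterfactual([3.0, 2.0])  # currently denied: 2*3 + 2 = 8
# cf is an approved applicant a total distance of 2 away from the original.
```

Anchors work in the opposite direction: instead of the smallest change that flips the decision, they find the conditions under which the decision stays fixed.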
explainX.ai Platform
+ API Call from Jupyter Notebook/IDE
+ Interactive Visualizations
+ Sharing dashboards & actionable insights
PRODUCT - ARCHITECTURE
DATA + AI MODELS → EXPLAINABLE AI ENGINE → RESULT LAYER (MODEL EXPLANATIONS, MODEL DEBUGGING, MODEL MONITORING) → USER LAYER
14. HOW CAN WE VISUALIZE & EXPLAIN COMPLEX MODELS?
15. SINGLE API CALL
PRODUCT: explainX.ai v1.0
WEB APPLICATION ACCESSIBLE WITH A SINGLE API CALL
WATCH MVP DEMO
16. SINGLE API CALL
PRODUCT: explainX.ai v1.0
Any ML Model
Interactive and detailed explanations
One Line of Code
With just a single line of code, data scientists can integrate our explainability module into their personal workspace.
EXPLAIN YOUR BLACK-BOX MODEL NOW
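The "one line of code" pattern can be pictured with a stub entry point that bundles global and local attributions into a single report. Everything below (the `explain` name, its signature, the report keys, and the zero-baseline attribution rule) is hypothetical, invented for illustration, and is not the actual explainX API:

```python
def explain(model, X):
    """Hypothetical one-call entry point: bundle global feature importance
    and per-row explanations into one report dict."""
    base = [model(x) for x in X]
    n_features = len(X[0])
    report = {"global_importance": [], "local_explanations": []}
    # Global: mean absolute change in prediction when a feature is zeroed out.
    for j in range(n_features):
        diffs = [abs(model(x[:j] + [0.0] + x[j + 1:]) - p) for x, p in zip(X, base)]
        report["global_importance"].append(sum(diffs) / len(diffs))
    # Local: each feature's contribution relative to an all-zero baseline.
    for x in X:
        contrib = [model([0.0] * j + [x[j]] + [0.0] * (n_features - j - 1))
                   for j in range(n_features)]
        report["local_explanations"].append(contrib)
    return report

def model(x):
    # Toy black-box standing in for any trained ML model.
    return 2.0 * x[0] + 1.0 * x[1]

report = explain(model, X=[[1.0, 2.0], [3.0, 0.0]])
```

The single visible call hides the probing loop behind it; an interactive dashboard is then just a renderer over a report like this one.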
17. HOW CAN WE VISUALIZE & EXPLAIN COMPLEX MODELS?
25. How can users take action from explanations?
MONITOR BIAS & PERFORMANCE
EXPLAIN MODEL DECISIONS
BUILD TRUST IN MODEL LOGIC
MAKE INFORMED DECISIONS
26. CURRENT VERSION
+ Feature attribution based algorithms
+ Prototypes & examples based algorithms
+ Feature interaction plots
+ An interactive platform for model understanding & debugging
TRY IT OUT! https://explainx.ai/register
FUTURE PIPELINE
+ Adding more algorithms for counterfactuals, anchors and bias detection
+ Scaling for enterprise adoption
+ Building support for complicated deep learning models
+ Real-time monitoring