Presented at MET4FOF Workshop, JULY 2020
I talk about our recent work on combining Bayesian deep learning with Explainable Artificial Intelligence (XAI) methods. In particular, we look at Bayesian autoencoders.
Uncertainty Quantification with Unsupervised Deep Learning and Multi-Agent System
1. Uncertainty Quantification with Unsupervised Deep Learning and Multi-Agent System
Bang Xiang Yong
Alexandra Brintrup
2. Introduction
1. Trend: Machine learning (ML) techniques are a core pillar of the Industry 4.0 paradigm.
2. Idea: Train a model on a set of data, then predict on unseen data.
3. Differences from conventional statistical models:
i. High-dimensional data (heterogeneous sensors)
ii. Complex and non-linear relationships
iii. Dynamic environments (data is rarely stationary)
4. Example: ZEMA Dataset for Prognosis
i. Given sensor measurements and run-to-failure times of electromechanical cylinders, fit a model and predict on new machines.
ii. Example ML pipeline (preprocessing and model stages feeding evaluation):
Data Stream → FFT → BFC → Pearson Correlation → Linear Discriminant Analysis → Evaluation
• X: sensor data; Y: % degradation
• Evaluation metrics: accuracy (classification), MSE (regression)
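As a rough illustration of this kind of pipeline, the sketch below builds FFT amplitude features from a toy sensor stream and keeps the features best correlated with the degradation target. The data shapes, the linear target, and the choice of 50 features are all hypothetical; the actual BFC and LDA stages of the ZEMA pipeline are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the sensor stream: 100 cycles, 2000 time steps, 11 sensors.
X_time = rng.normal(size=(100, 2000, 11))
y = np.linspace(0.0, 100.0, 100)          # hypothetical % degradation per cycle

# 1) FFT: keep 1000 amplitude bins per sensor (drop the DC component).
X_fft = np.abs(np.fft.rfft(X_time, axis=1))[:, 1:1001, :]
X_flat = X_fft.reshape(len(X_fft), -1)    # (cycles, 1000 * 11 features)

# 2) Rank features by Pearson correlation with the target, keep the top 50.
Xc = X_flat - X_flat.mean(axis=0)
yc = y - y.mean()
pcc = (Xc * yc[:, None]).sum(axis=0) / (
    np.sqrt((Xc**2).sum(axis=0)) * np.sqrt((yc**2).sum()) + 1e-12
)
top_k = np.argsort(-np.abs(pcc))[:50]
X_sel = X_flat[:, top_k]                  # input to a downstream model (e.g. LDA)
```

The selected matrix `X_sel` would then be passed to the classifier or regressor for evaluation.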
3. Challenges
• Q: Can we trust and explain the ML model?
• Problem 1: Lack of uncertainty intervals. Does the model know what it doesn't know?
• Predictive error is not the same as uncertainty!
• In practice, we want predictions such as 50% ± 3% with “dynamic” uncertainty,
• e.g. with a perturbed sensor, we should instead get 50% ± 10%!
• Problem 2: We have 11 sensors; which sensors contribute to the model's prediction?
• At test time, we only observe the prediction, e.g. 90% health remaining.
• Why is the model predicting 90%, instead of 30% or 60%?
• Which sensors led it to the decision?
• Problem 3: Limited data availability. Ideally we want as many run-to-failure examples as possible to learn from!
• In practice, we have only very few.
• Testing the trained model on other assets yields unstable performance.
Figure: poorly vs well-calibrated uncertainty of classification
Figure: explaining predictions via contribution of features
4. Contribution
Development of Hierarchical & Coalitional Bayesian Autoencoders (BAE)
• Unsupervised model: addresses the lack of faulty-data availability.
• Bayesian deep learning: a probabilistic framework, giving a principled approach to high-dimensional data with uncertainty bands.
• Explainable AI (XAI) predictions: individual contributions by sensor and feature, with uncertainty.
• Multi-agent system: complex model management and time-series simulation.
Figure: screenshot of the agent-based framework applying BAE on a sensor network
5. Addendum: In Development: Metrological Agents
Joint work with PTB (Bjoern Ludwig & Max Gruber)
• Metrological meta-data class
• Time-series buffer
6. Research Outputs
Conference papers
1. B. X. Yong, T. Pearce and A. Brintrup, "Bayesian Autoencoders: Analysing and Fixing the Bernoulli Likelihood for Out-of-Distribution Detection," ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning.
2. B. X. Yong, Y. Fathy and A. Brintrup, "Bayesian Autoencoders for Drift Detection in Industrial Environments," 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT, Roma, Italy.
3. B. X. Yong and A. Brintrup, "Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System," 9th Workshop on Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future.
Software packages
1. agentMET4FOF: agent-based framework for metrologically-enabled distributed sensor data analytics
• https://github.com/bangxiangyong/agentMET4FOF
2. baetorch: Bayesian autoencoder library
• https://github.com/bangxiangyong/baetorch
Work in progress
1. Journal paper on Hierarchical and Coalitional BAE for industrial sensors
2. Video tutorials and metrological agents (agentMET4FOF) with PTB
7. Bayesian Autoencoders as Bayesian Neural Networks
With input x, the autoencoder (parameterised by θ) reconstructs the input as x̂ = f_θ(x).
Training with Bayes' rule on unlabelled data X (x̂ being the reconstructed data):
p(θ | X) ∝ p(X | θ) p(θ)
Prediction: mean and variance of the log-likelihood of new data x*, conditioned on the training data:
E[log p(x* | θ)] and Var[log p(x* | θ)], with θ sampled from p(θ | X)
Gaussian log-likelihood (also known as the reconstruction loss):
log p(x | θ) = −Σᵢ (xᵢ − x̂ᵢ)² / (2σ²) + const.
Methods of sampling from the posterior:
• MCMC
• Variational Inference
• Bayesian ensembling
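As a loose numpy sketch of this slide's idea (the linear architecture, the gradient-descent training, the ensemble size of 5, and σ = 1 are all illustrative assumptions, not baetorch's implementation): each ensemble member stands in for a posterior sample of θ, and the mean and variance of the Gaussian log-likelihood across members give the prediction and its epistemic uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 11))            # toy training data from 11 sensors

def train_ae(X, k=4, steps=200, lr=1e-2, seed=0):
    """One ensemble member: linear autoencoder x_hat = x @ W @ W.T,
    fitted by plain gradient descent on the reconstruction error."""
    r = np.random.default_rng(seed)
    W = r.normal(scale=0.1, size=(X.shape[1], k))
    for _ in range(steps):
        E = X @ W @ W.T - X                           # reconstruction error
        grad = (X.T @ E @ W + E.T @ X @ W) / len(X)   # d/dW of the squared error
        W = W - lr * grad
    return W

# "Bayesian ensembling" stand-in: several members from different initialisations.
ensemble = [train_ae(X, seed=s) for s in range(5)]

def gauss_ll(x_new, W, sigma=1.0):
    """Gaussian log-likelihood up to an additive constant:
    the negative reconstruction loss."""
    x_hat = x_new @ W @ W.T
    return -0.5 * np.sum((x_new - x_hat) ** 2, axis=1) / sigma**2

x_star = rng.normal(size=(10, 11))                          # new data
lls = np.stack([gauss_ll(x_star, W) for W in ensemble])     # (members, samples)
ll_mean, ll_var = lls.mean(axis=0), lls.var(axis=0)         # prediction + epistemic uncertainty
```

With MCMC or variational inference, the loop over `ensemble` would instead iterate over posterior samples of θ.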
9. Bayesian Autoencoders for Out-of-Distribution & Drift Detection
The choice of likelihood matters in detecting out-of-distribution inputs!
B. X. Yong, T. Pearce and A. Brintrup, "Bayesian Autoencoders: Analysing and Fixing the Bernoulli Likelihood for Out-of-Distribution Detection," ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning.
Ability to distinguish types of drifts (real vs virtual drifts) on the ZEMA hydraulic condition-monitoring dataset:
B. X. Yong, Y. Fathy and A. Brintrup, "Bayesian Autoencoders for Drift Detection in Industrial Environments," 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT, Roma, Italy.
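To illustrate how a reconstruction likelihood can flag out-of-distribution or drifted inputs, here is a sketch that uses a closed-form linear autoencoder (PCA) in place of a BAE; the threshold rule (99th percentile of in-distribution NLL) and the variance-inflated "drift" are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
X_train = rng.normal(size=(500, 11))           # in-distribution sensor data

# Closed-form linear autoencoder: reconstruct through the top-4 principal directions.
_, _, Vt = np.linalg.svd(X_train - X_train.mean(axis=0), full_matrices=False)
V = Vt[:4].T                                   # (11, 4) principal subspace
mu = X_train.mean(axis=0)

def nll(X, sigma=1.0):
    """Negative Gaussian log-likelihood: per-sample reconstruction error."""
    Xc = X - mu
    X_hat = Xc @ V @ V.T
    return 0.5 * np.sum((Xc - X_hat) ** 2, axis=1) / sigma**2

# Flag inputs whose NLL exceeds the in-distribution 99th percentile.
threshold = np.quantile(nll(X_train), 0.99)

X_id = rng.normal(size=(100, 11))              # in-distribution test data
X_ood = rng.normal(size=(100, 11)) * 3.0       # "drifted" data: inflated variance
```

Most of `X_ood` lands above the threshold while `X_id` stays below it; a BAE adds an uncertainty band around the NLL score itself.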
10. WIP: Hierarchical & Coalitional BAE (Experiments on ZEMA EMC Prognosis)
ZEMA EMC dataset (on Zenodo: https://zenodo.org/record/2702226):
i. Assets: 3x run-to-failure
ii. Number of cycles (examples): 6292, 6083, 5732
iii. 11 sensors
iv. Train: first 10% of cycles
v. Test: remaining 90% of cycles
vi. Each cycle has measurements of 2000 time steps × 11 sensors; applying an FFT gives 1000 frequency bins × 11 sensors.
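The per-cycle preprocessing and the 10%/90% chronological split described above can be sketched as follows (the array sizes are scaled-down stand-ins for a real asset):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for one ZEMA EMC asset (real assets have ~6000 cycles).
n_cycles = 120
cycles = rng.normal(size=(n_cycles, 2000, 11))    # cycles x time steps x sensors

# Per cycle: 2000 time steps -> 1000 FFT amplitude bins per sensor.
feats = np.abs(np.fft.rfft(cycles, axis=1))[:, 1:1001, :]

# Chronological split: first 10% of cycles (healthy) for training, rest for test.
split = int(0.1 * n_cycles)
X_train, X_test = feats[:split], feats[split:]
```

The split is chronological rather than random because only the early (healthy) cycles are assumed available at training time.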
11. WIP: Hierarchical & Coalitional BAE (Configurations)
• Vanilla BAE: 1 BAE for all 11 sensors
• Coalitional BAE: 1 BAE per sensor
• Trained on a single asset, or with multi-asset aggregation
12. Hierarchical & Coalitional BAE (Axis 3, Lifetime = 6292 cycles)
Figure: total log-likelihood over the lifetime for the Vanilla BAE (1 BAE for 11 sensors) vs the Coalitional BAE (1 BAE per sensor), single asset vs multi-asset aggregation.
Qualitatively, the total log-likelihood gain for the vanilla vs the coalitional BAE appears indistinguishable.
When combining knowledge from other assets, the uncertainty appears higher as the condition moves away from the healthy state (better calibrated?).
13. Quantitative Comparison
Note: PCC = Pearson correlation coefficient.
Intuition: we should expect the LL gain at degradation = 50% (half-life) to be proportional to the overall lifetime.
• The higher the LL gain, the healthier the asset (and vice versa).
14. WIP: Explainable AI under Uncertainty with BAE
Figure: per-sensor outputs for the Vanilla BAE (1 BAE for 11 sensors) vs the Coalitional BAE (1 BAE per sensor), single asset vs multi-asset aggregation.
The outputs of the vanilla BAE are highly correlated, which may give misleading explanations when identifying sensor contributions.
15. WIP: Explainable AI under Uncertainty with BAE
• Can we tell which sensor was injected with noise?
Figure: per-sensor contributions, Vanilla BAE vs Coalitional BAE.
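The question can be sketched with a toy per-sensor score: if each sensor has its own model, the sensor injected with noise should dominate the per-sensor contributions. Here a simple squared z-score per sensor stands in for one BAE per sensor; the injected offset and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
X_train = rng.normal(size=(500, 11))      # toy "healthy" data for 11 sensors
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)

def per_sensor_score(x):
    """Squared z-score per sensor: a trivial stand-in for one BAE per sensor,
    where each model scores only its own sensor's deviation from normal."""
    return ((x - mu) / sd) ** 2

x = rng.normal(size=11)
x[6] += 10.0                              # inject noise into sensor index 6
contrib = per_sensor_score(x)             # per-sensor contribution to the anomaly
```

The perturbed sensor dominates `contrib`, which is the behaviour the coalitional BAE aims for; a vanilla BAE trained jointly on all sensors can instead spread the anomaly across correlated outputs.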
16. Contributions
1. Hierarchical & Coalitional Bayesian Autoencoders:
•Bayesian deep learning for unsupervised models.
•Explainable AI predictions (XAI).
•Multi-agent system.
2. Code:
• agentMET4FOF: https://github.com/bangxiangyong/agentMET4FOF
• baetorch: https://github.com/bangxiangyong/baetorch
• MET4FOF repo: https://github.com/Met4FoF
3. Thanks to our collaborators (open to more!):
• PTB
• NPL
• VSL
• IMBIH
• ZEMA
• STRATH
17. Appendix: Model Setup
Figure: Vanilla BAE architecture vs Coalitional BAE architecture.
Each BAE is trained for 250 epochs using Leslie Smith's automatic learning-rate finder.
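Leslie Smith's learning-rate range test can be sketched as follows: keep training while sweeping the learning rate exponentially upward, record the loss after each step, and pick a rate where the loss falls fastest. The toy linear model, sweep range, and selection heuristic below are assumptions for illustration, not baetorch's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(256, 11))
y = X @ np.ones(11)                       # toy regression target
w = rng.normal(scale=0.1, size=11)        # toy linear model, loss = MSE

lrs = np.logspace(-5, 0, 50)              # sweep the LR exponentially upward
losses = []
for lr in lrs:
    grad = X.T @ (X @ w - y) / len(X)     # gradient of 0.5 * MSE
    w = w - lr * grad                     # keep training while the LR grows
    losses.append(np.mean((X @ w - y) ** 2))
losses = np.array(losses)

# Heuristic: choose the LR where the loss drops fastest (steepest descent);
# in practice one picks a rate just below where the loss starts to blow up.
best_lr = lrs[np.argmin(np.gradient(losses))]
```

A real implementation would run this on mini-batches of the autoencoder's reconstruction loss rather than a closed-form linear model.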
21. Appendix: baetorch, a Bayesian Autoencoder Library
Features:
• Quantify epistemic uncertainty using approximate Bayesian inference:
• MC-Dropout
• Bayesian ensembling (with anchored priors)
• Variational inference (Bayes by Backprop)
• Options for specifying the data likelihood p(X|θ) as Gaussian or Bernoulli
• Quantify (homo-/heteroskedastic) aleatoric uncertainty using a Gaussian likelihood
• Automatic learning-rate finder for Bayesian autoencoders