1. Exploring Uncertainty Measures in Deep
Networks for Sclerosis Lesion Detection
and Segmentation
Tanya Nair, Doina Precup, Douglas L. Arnold, Tal Arbel
Centre for Intelligent Machines, McGill University, Montréal, Canada
MICCAI 2018
Feb 14, 2019
Medical AI Lab
Kyuri Kim
2. Introduction
Problem
• Deep learning frameworks have seen slow adoption in medical imaging
• Lower performance on focal pathology
• Deterministic outputs (no associated uncertainty)
Solutions
• MC Dropout is a recent machine learning approach to quantifying uncertainty in a DL network
• Use uncertainty information in medical imaging to support clinical analysis
• How are different MC Dropout uncertainty measures related to pathology segmentation and detection predictions?
Contributions
• The first DL framework to produce voxel-level segmentation and
lesion-level detection of focal pathologies with associated uncertainty.
• Concrete demonstration that MC dropout uncertainty measures
correlate with incorrect predictions in the context of segmentation &
detection of Multiple Sclerosis (MS) lesions in a large real-world
clinical dataset
3. Related Work
"Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning."
Gal, Y., Ghahramani, Z., ICML, pp. 1050–1059 (2016)
Bayesian Probability (H: hypothesis, D: data):
P(H|D) = P(D|H) · P(H) / P(D)
(posterior = likelihood × prior / evidence)
DL vs. Bayesian Neural Net.
• Training determines a probability distribution over the weights W rather than fixing them
• Infer the distribution over W, P(W|D), then predict the distribution of Y for a new input, P(Y'|X', W)
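The BNN prediction above amounts to marginalizing over the weight posterior; in practice the integral is approximated by averaging over T sampled weight configurations (standard formulation, with q(W) denoting the approximate posterior):

```latex
p(y' \mid x', D) = \int p(y' \mid x', W)\, p(W \mid D)\, dW
\;\approx\; \frac{1}{T} \sum_{t=1}^{T} p\big(y' \mid x', \widehat{W}_t\big),
\qquad \widehat{W}_t \sim q(W)
```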
4. Related Work
Rearranging the objective of Dropout + NN shows it can be made to match the objective of a BNN.
Finally, Dropout + NN (+MC) = BNN
NN-dropout with math:
objective = data term (MSE or NLL) computed with the dropped-out weights Ŵ + L2 weight decay
BNN vs. NN-dropout.
Differences: 1) Regularization term: KL divergence vs. L2 weight decay
2) Scale of the objective
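As a sketch of the correspondence (following the general form in Gal & Ghahramani; E is the per-example loss, MSE or NLL, and q(W) the dropout approximate posterior), the two objectives line up term by term:

```latex
\mathcal{L}_{\text{dropout}} = \frac{1}{N}\sum_{i=1}^{N} E(y_i, \hat{y}_i) + \lambda \sum_{l} \lVert W_l \rVert^2
\quad \text{vs.} \quad
\mathcal{L}_{\text{VI}} = -\sum_{i=1}^{N} \mathbb{E}_{q(W)}\!\big[\log p(y_i \mid x_i, W)\big] + \mathrm{KL}\big(q(W) \,\Vert\, p(W)\big)
```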
5. Related Work
In an ensemble, feeding the same input to several models and collecting their outputs lets us compute a predictive mean and variance.
The predictive variance obtained this way represents the uncertainty.
Since dropout is a form of ensembling, the same effect can be obtained through dropout alone.
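The ensemble-via-dropout idea can be sketched in a few lines of NumPy; the single-layer "network" and its weights here are toy placeholders, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer "network": hypothetical trained weights, not the paper's model.
W = rng.normal(size=(5, 1))
x = rng.normal(size=(1, 5))  # one input example

def mc_dropout_forward(x, W, p=0.5, T=100):
    """Run T stochastic forward passes with dropout kept ON at test time."""
    preds = []
    for _ in range(T):
        mask = rng.random(W.shape) > p              # Bernoulli dropout mask
        Wt = W * mask / (1.0 - p)                   # inverted-dropout rescaling
        logit = x @ Wt
        preds.append(1.0 / (1.0 + np.exp(-logit)))  # sigmoid output
    preds = np.stack(preds)                         # shape (T, 1, 1)
    # Predictive mean acts as the ensemble prediction;
    # predictive variance is the MC-dropout uncertainty.
    return preds.mean(axis=0), preds.var(axis=0)

mean, var = mc_dropout_forward(x, W)
```

Each pass drops a different subset of weights, so the T outputs behave like an implicit ensemble.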
6. Method
- 1,064 relapsing-remitting MS patients (2,182 training, 251 validation, 251 test scans)
- T1, T2, FLAIR, and PDW modalities used for training
- Ground-truth lesions smaller than 3 voxels removed, as per clinical protocol
- 18-connected neighborhood used for under-segmented ground truth
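The preprocessing above (drop components under 3 voxels, group voxels with 18-connectivity) can be illustrated with a small flood fill; `remove_small_lesions` is a hypothetical helper, not the authors' pipeline:

```python
import numpy as np
from collections import deque

def offsets_18():
    """18-connectivity in 3D: neighbors sharing a face (6) or an edge (12)."""
    offs = []
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if 1 <= abs(dz) + abs(dy) + abs(dx) <= 2:
                    offs.append((dz, dy, dx))
    return offs

def remove_small_lesions(mask, min_voxels=3):
    """Drop 18-connected components smaller than min_voxels, per clinical protocol."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask)
    out = np.zeros_like(mask)
    offs = offsets_18()
    for idx in zip(*np.nonzero(mask)):
        if seen[idx]:
            continue
        comp, q = [idx], deque([idx])   # BFS flood fill from an unseen voxel
        seen[idx] = True
        while q:
            z, y, x = q.popleft()
            for dz, dy, dx in offs:
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                        and 0 <= nx < mask.shape[2]
                        and mask[nz, ny, nx] and not seen[nz, ny, nx]):
                    seen[nz, ny, nx] = True
                    q.append((nz, ny, nx))
                    comp.append((nz, ny, nx))
        if len(comp) >= min_voxels:     # keep only components at clinical size
            for c in comp:
                out[c] = True
    return out
```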
[Figure: lesion-level prediction and lesion-level uncertainty maps, with regions labeled Lesion / Non-Lesion / Uncertain]
7. Method
Dropout as a Bayesian Approximation.
Measure of Uncertainty in DL Networks.
X : pairs of multi-sequence 3D MRIs
Y : associated binary ground truth T2 lesion labels
Posterior over the weights, p(W | X, Y), is approximated by MC dropout sampling from the 3D segmentation network (T Monte Carlo samples).
Four uncertainty measures:
• Predictive Entropy: entropy of the average prediction over the T samples
• Mutual Information: difference between the entropy of the average prediction and the mean of each sample's entropy
• MC Sample Variance: variance of the T sampled predictions
• Prediction Variance: learned measure associated with noise in the input; the network outputs a noise variance V^W alongside the logit F^W to form the prediction
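Given T MC-dropout samples of the sigmoid output, three of the four measures can be computed directly; the learned Prediction Variance needs the network's extra variance output V^W, so it is omitted in this minimal NumPy sketch:

```python
import numpy as np

def uncertainty_measures(p_samples, eps=1e-8):
    """Voxel-wise uncertainty from T MC-dropout samples of p(lesion).
    p_samples: array of shape (T, ...) holding per-sample sigmoid outputs."""
    p_mean = p_samples.mean(axis=0)

    def binary_entropy(p):
        p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
        return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

    pred_entropy = binary_entropy(p_mean)               # entropy of average prediction
    expected_entropy = binary_entropy(p_samples).mean(axis=0)
    mutual_info = pred_entropy - expected_entropy       # epistemic component
    sample_var = p_samples.var(axis=0)                  # MC sample variance
    return pred_entropy, mutual_info, sample_var
```

Mutual information is non-negative because entropy is concave: samples that agree give MI near zero, disagreeing samples give MI above zero.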
8. Result & Conclusion
Conclusions
• MC dropout uncertainty measures correlate with incorrect predictions.
• Uncertainty reflects the challenges of detecting small lesions and segmenting lesion contours.
• Demonstrated a mechanism for DL frameworks to be adopted into clinical workflows.
[Figure: 2-class classification (Lesion / Non-Lesion) vs. 3-class classification with entropy uncertainty; the uncertainty profile of a false positive shows it being flagged as Uncertain]
9. Result & Conclusion
[Figure: (a) voxel-wise thresholding at thresholds 0.5, 0.1, and 0.01; (b) lesion-wise thresholding across all lesions, small (3-10 vox), medium (11-50 vox), and large (51+ vox) lesions]
10. Result & Conclusion
[Figure: segmentation results under Entropy, Mutual Information, MC Sample Variance, and Prediction Variance, shown at the baseline and at two uncertainty thresholds]
• Panels e.-h. provide an example of the uncertainties
• Although large lesions have larger, more uncertain contours, the accumulation of lesion evidence within the boundary provides overwhelming certainty that there is a lesion there.
→ FPs in the baseline turn into TNs as the uncertainty threshold is increased
→ TPs in the baseline turn into FNs
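The thresholding behavior above can be illustrated with a hypothetical helper that keeps only confident voxels: a predicted-positive voxel flagged as uncertain is dropped, so a baseline FP becomes a TN while a baseline TP becomes an FN:

```python
import numpy as np

def threshold_by_uncertainty(pred, unc, gt, u_th):
    """Count confusion-matrix entries after excluding uncertain predictions.
    pred, gt: boolean masks; unc: uncertainty map; u_th: confidence cutoff."""
    confident = unc <= u_th
    kept_pred = pred & confident       # uncertain positive predictions are dropped
    tp = int(np.sum(kept_pred & gt))
    fp = int(np.sum(kept_pred & ~gt))
    fn = int(np.sum(~kept_pred & gt))  # includes TPs dropped for being uncertain
    tn = int(np.sum(~kept_pred & ~gt))
    return tp, fp, fn, tn
```

With a high-uncertainty false positive, tightening the cutoff moves that voxel from FP to TN, mirroring the effect described in the slide.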