Rsqrd AI: Application of Explanation Model in Healthcare


In this talk, Jasmine Wilkerson discusses the need for explainability in healthcare.

Presented on 06/06/2019

**These slides are from a talk given at Rsqrd AI. Learn more at rsqrdai.org**

Related Books

Free with a 30 day trial from Scribd

See all

Related Audiobooks

Free with a 30 day trial from Scribd

See all
  • Be the first to like this

1. Jasmine Wilkerson
   jasminewilkerson@gmail.com
   Application of Explanation Model In Healthcare
2. Agenda
   • Why explanation models
   • Which explanation models
   • Internal vs. external application
3. Interpretable machine learning
   • Explanations of AI/machine learning models to humans with domain knowledge [Craik 1967, Doshi-Velez 2014]
   • Why is the prediction being made?
   • Comprehensible to humans in
     (i) natural language
     (ii) easy-to-understand representations
4. Why do we need explanation models?
   • When fairness is critical:
     o Any context where humans are required to provide explanations, so that people cannot hide behind machine learning models [Al-Shedivat 2017B, Doshi-Velez 2014]
   • When consequences are far-reaching:
     o Predictions can have far-reaching consequences, e.g., recommending an operation or recommending sending a patient to hospice
   • When the cost of a mistake is high:
     o Ex: misclassification of a malignant tumor can be costly and dangerous
   • When a new/unknown hypothesis is drawn:
     o "It's not a human move. I've never seen a human play this move." [Fan Hui]
     o Pneumonia patients with asthma had lower risk of dying [Caruana 2015]
5. Why do we need explanation models?
   Doctor: "Well, the computer assures me that you are fine and it has given me 100 reasons for it."
   Patient: "Doctor, are you sure that I am ok now? My head still hurts."
6. Why do we need explanation models?
   • Understanding end-to-end application criteria
     o Who are the end users
     o Scoring scenarios
     o Engineering requirements
     o Model properties

   Explanation Model        | Local (n=1) | Global (n=N) | Explanation Type    | Model Specificity
   LIME                     | X           |              | Relative Importance | Agnostic
   SHAP Kernel Explainer    | X           | X            | Relative Importance | Agnostic
   SHAP Tree Explainer      | X           | X            | Relative Importance | Tree-based models
   GAM                      |             | X            | Graphical           | Self
   GA2M                     |             | X            | Graphical           | Self
   ICE Plots                | X           |              | Graphical           | Agnostic
   Partial Dependence Plots |             | X            | Graphical           | Agnostic
   Model Distillation       |             | X            | Graphical           | Agnostic
   Logistic Regression      |             | X            | Relative Importance | Self
   Decision Trees           |             | X            | Rules               | Self
   XGB Explainer            | X           |              | Relative Importance | XGBoost
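To make the local vs. global and agnostic vs. specific distinctions in the table concrete, here is a minimal sketch (not from the talk) that explains a single prediction with LIME and one feature's average effect with a partial dependence plot. The names `model`, `X_train`, `X_test`, and `feature_names` are assumed placeholders for an already-fitted regressor and its data.

```python
# Hypothetical sketch: a local, model-agnostic explanation (LIME) and a global,
# model-agnostic explanation (partial dependence) for the same fitted model.
# `model`, `X_train`, `X_test` (pandas DataFrames) and `feature_names` are assumed.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.inspection import PartialDependenceDisplay

# Local (n = 1): weighted feature contributions for one prediction
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=feature_names, mode="regression"
)
local_exp = lime_explainer.explain_instance(
    X_test.values[0], model.predict, num_features=10
)
print(local_exp.as_list())  # [(feature condition, contribution), ...]

# Global (n = N): average effect of one feature over the whole dataset
PartialDependenceDisplay.from_estimator(model, X_train, features=[feature_names[0]])
```

LIME answers "why this one prediction" for any model, while the partial dependence plot answers "how does this feature affect predictions on average", which is why both appear as model-agnostic in the table but in different scope columns.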
7. Explanation Model Evaluation Criteria
   Performance
   • What is the acceptable tolerance of model performance (precision, recall, MAE, AUC, etc.)?
   Scalability
   • Number of features
   • Scoring time
   • Size of the training set
   Model Composition
   • Ante-hoc: explanations are part of the model itself
   • Post-hoc: explanations are generated by other techniques applied to the predictive model
8. Explanation Model Evaluation Criteria
   Model Explanation Fidelity
   • Does the explanation correspond to how the model actually makes the prediction?
   Model Specificity
   • Is the explanation method specific to a particular ML model, or can it be used with all models?
   Risk
   • What is the risk associated with the outcome being predicted? E.g., predicting end of life incorrectly carries a much higher risk than predicting pharmacy cost.
9. Explanation Model Evaluation Criteria
10. Use case
    Problem statement: predict length of stay (LOS) of ICU encounters
    Data: MIMIC-III ICU
    Features
    • Demographic
    • Procedure
    • Diagnosis
    • Vital
    • Lab
    • Utilization
    Prediction model: XGBoost
    Explanation model: Shapley values
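A minimal sketch of this pipeline, assuming the MIMIC-III encounters have already been flattened into a feature matrix `X` and an LOS target `y` (the preprocessing is not shown on the slide); the XGBoost hyperparameters below are illustrative, not the presenter's.

```python
# Illustrative sketch only: train an XGBoost LOS regressor on MIMIC-III features
# and compute Shapley values with SHAP's TreeExplainer. `X` (feature matrix) and
# `y` (length of stay in days) are assumed to be pre-built pandas objects.
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)

explainer = shap.TreeExplainer(model)        # tree-specific explainer from the table
shap_values = explainer.shap_values(X_test)  # array of shape (n_encounters, n_features)
```

TreeExplainer exploits the tree structure to compute exact Shapley values efficiently; the model-agnostic KernelExplainer from the table would also work here, but it is much slower on wide clinical feature sets.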
11. Shapley Global Explanation - LOS
12. Shapley Global Explanation Correlation - LOS
13. Shapley Global Explanation Feature Clusters - LOS
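The three global views above can be approximated from the `shap_values` computed in the previous sketch; the correlation and clustering steps below are one plausible reading of those slides, not the presenter's exact method.

```python
# Continues the sketch above (shap_values, X_test): three global views of the model.
import numpy as np
import shap
from scipy.cluster.hierarchy import dendrogram, linkage

# Global feature importance: mean |SHAP| ranking plus a per-encounter beeswarm
shap.summary_plot(shap_values, X_test, plot_type="bar")
shap.summary_plot(shap_values, X_test)

# Assumed reading of the "correlation" slide: how each feature's raw value moves
# with its own SHAP contribution, i.e. direction and strength of its effect on LOS
effect_corr = {
    col: np.corrcoef(X_test[col], shap_values[:, i])[0, 1]
    for i, col in enumerate(X_test.columns)
}

# Assumed reading of the "feature clusters" slide: group features whose SHAP
# profiles behave similarly across encounters
dendrogram(linkage(shap_values.T, method="ward"), labels=list(X_test.columns))
```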
14. Shapley Local Explanation
    Patient 1: predicted LOS 2.04, actual LOS 2.35
    Patient 2: predicted LOS 13.21, actual LOS 15.21
    Top contributing diagnosis features shown in the local explanations include: coronary atherosclerosis and other heart disease; acute and unspecified renal failure; congestive heart failure (non-hypertensive); essential hypertension; respiratory failure/insufficiency/arrest; cardiac arrest and ventricular fibrillation; acute myocardial infarction; pneumonia (except that caused by tuberculosis); epilepsy/convulsions; coma, stupor, and brain damage; peripheral and visceral atherosclerosis; hepatitis; diverticulosis and diverticulitis.
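For a single encounter such as Patient 1 or Patient 2, the same explainer produces a local breakdown; a continuation of the sketch above, with the encounter index chosen arbitrarily:

```python
# Continues the sketch above: local Shapley explanation for one ICU encounter.
import shap

i = 0  # index of a single test-set encounter, e.g. "Patient 1"
print("Predicted LOS:", float(model.predict(X_test.iloc[[i]])[0]))
print("Actual LOS:   ", float(y_test.iloc[i]))

# Per-feature contributions pushing this encounter's predicted LOS above or
# below the baseline (the average prediction over the training data)
shap.force_plot(explainer.expected_value, shap_values[i], X_test.iloc[i], matplotlib=True)
```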
  15. 15. "We don't believe in artificial intelligence. We believe in assistive intelligence. We leave the decision on how to act in the hands of well-trained experts, like the physicians.” says Dr. Ankur Teredesai, CTO, KenSci Inc.
16. Thanks!!
