JOSE M. JUAREZ
AIKE research group, University of Murcia
FRAMING TRUST
IN MEDICAL AI
EURAI ACAI - TAILOR SUMMER SCHOOL 2022
JUN 14TH 2022, BARCELONA
Contact:
Email: jmjuarez@um.es
Twitter: @juarezprofesor
LinkedIn: jose-m-juarez
2
TRUST IN MEDICAL AI
? ? ? ?
FRAMING TRUST IN MEDICAL AI
3
Unknown technology
Ethical or legal
Trustworthy
FRAMING TRUST IN MEDICAL AI
4
AI-TECHNOLOGY
IN HEALTHCARE
Unknown technology
Ethical or legal
Trustworthy
Picture from N Melchor UnSplash https://unsplash.com/es/fotos/43LwvC-
eQPM?utm_source=unsplash&utm_medium=referral&utm_content=creditShareLink
FRAMING TRUST IN MEDICAL AI
5
TRUST IN MEDICAL AI
? ? ? ?
FRAMING TRUST IN MEDICAL AI
• OUTLINE:
1. HUMANS, HEALTHCARE AND ETHICAL AI
2. AI UNDERSTANDABLE IN HEALTHCARE
ENVIRONMENTS
3. REGULATIONS FOR MEDICAL INDUSTRY
4. FINAL
6
FRAMING TRUST IN MEDICAL AI
• OUTLINE:
1. HUMANS, HEALTHCARE, ETHICS AND AI
2. AI UNDERSTANDABLE IN HEALTHCARE
ENVIRONMENTS
3. REGULATIONS FOR MEDICAL INDUSTRY
4. FINAL
7
FRAMING TRUST IN MEDICAL AI
1. HUMANS, HEALTHCARE, ETHICS AND AI
– WHY IS HEALTHCARE DOMAIN DIFFERENT?
– WHY IS AI DIFFERENT FROM OTHER TOOLS?
– WHAT IS MEDICAL AI?
– AI ETHICAL CONCERNS & INITIATIVES
8
FRAMING TRUST IN MEDICAL AI
1. HUMANS, HEALTHCARE, ETHICS AND AI
– WHY IS HEALTHCARE DOMAIN DIFFERENT?
– Discipline dependence
– Definition of “infection” (CDC)
– René Laennec
9
FRAMING TRUST IN MEDICAL AI
Picture by 8photo from https://www.freepik.es/fotos/doctor-estetoscopio
1. HUMANS, HEALTHCARE, ETHICS AND AI
– WHY IS AI DIFFERENT FROM A STETHOSCOPE?
– Aim
– Multidisciplinary
10
FRAMING TRUST IN MEDICAL AI
Picture by 8photo from https://www.freepik.es/fotos/doctor-estetoscopio
1. HUMANS, HEALTHCARE, ETHICS AND AI
– WHAT IS MEDICAL AI?
clinical dataset + ML ?= MEDICAL AI
Solve medical problem with AI+ bioethical approval
Terminology & clinical semantics
Clinical data compilation
AI/ML fundamentals + tailored techniques
Medical validation
11
FRAMING TRUST IN MEDICAL AI
1. HUMANS, HEALTHCARE, ETHICS AND AI
– ETHICS AND AI
• ETHICS: DEFINITION
• HIGH-LEVEL PRINCIPLES
–IBM’s principles of trust and transparency
–Google’s principles on AI
–World Economic Forum’s principles for ethical AI
–AI4PEOPLE principles and recommendations
–IEEE general principles
–… 12
FRAMING TRUST IN MEDICAL AI
- ETHICS GUIDELINES FOR
TRUSTWORTHY AI
– 2019
– High-Level Expert Group on AI (independent expert group)
– Framework for trustworthy AI
• Foundations: 4 Ethical principles
(address tensions)
• Realisation of Trustworthy AI: 7 key
requirements (evaluate)
• Assessment of Trustworthy AI: assessment list (for specific AI applications)
13
Picture from: 2019-11-18 High-Level Expert Group on Artificial Intelligence. Ethics guidelines for trustworthy AI.
https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1
FRAMING TRUST IN MEDICAL AI
1. HUMANS, HEALTHCARE AND ETHICAL AI
• ETHICS GUIDELINES FOR TRUSTWORTHY AI
– 4 Ethical Principles in the context of AI systems
i. Respect for human autonomy
ii. Prevention of harm
iii. Fairness
iv. Explicability
14
FRAMING TRUST IN MEDICAL AI
• ETHICS GUIDELINES FOR TRUSTWORTHY AI
– Developing Trustworthy AI: 7 requirements
i. Human agency and oversight
ii. Technical robustness and safety
iii. Privacy and data governance
iv. Transparency
v. Diversity, non-discrimination and fairness
vi. Societal and environmental wellbeing
vii. Accountability
15
FRAMING TRUST IN MEDICAL AI
– USE CASE 1: Conversational agent during COVID-19
16
FRAMING TRUST IN MEDICAL AI
1. HUMANS, HEALTHCARE AND ETHICAL AI
– USE CASE 2: Pneumonia detection from X-rays
17
FRAMING TRUST IN MEDICAL AI
1. HUMANS, HEALTHCARE AND ETHICAL AI
• OUTLINE:
1. HUMANS, HEALTHCARE AND ETHICAL AI
2. AI UNDERSTANDABLE IN HEALTHCARE
3. REGULATIONS FOR THE MEDICAL INDUSTRY
4. FINAL
18
FRAMING TRUST IN MEDICAL AI
2. AI UNDERSTANDABLE IN HEALTHCARE
– LOST IN TRANSLATION
19
FRAMING TRUST IN MEDICAL AI
2. AI UNDERSTANDABLE IN HEALTHCARE
– SOME GENERAL CONSENSUS
• TRANSPARENT AI:
–Expert rules
–Interpretable models
–Well known in the medical field
• BLACK BOX models:
–Random forest
–XGBoost
–ANNs and complex architectures (deep learning)
• Focus on providing EXPLANATIONS (a minimal sketch contrasting the two families follows below)
20
FRAMING TRUST IN MEDICAL AI
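Below is a minimal Python sketch, added for illustration and not part of the original slides, contrasting a transparent model with a black box on synthetic data; the clinical-sounding feature names are hypothetical.

```python
# Minimal sketch (assumption, not from the slides): a transparent model vs. a black box.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
features = ["age", "crp", "temperature", "heart_rate"]  # hypothetical feature names

# Transparent AI: a shallow decision tree can be read as expert-style rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))

# Black box: a random forest is often more accurate but has no single readable
# rule set, hence the need for post-hoc explanations (LIME, SHAP, saliency...).
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("forest training accuracy:", forest.score(X, y))
```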
2. AI UNDERSTANDABLE IN HEALTHCARE
– EXPLANATIONS & XAI
• Explanation methods
–Model-agnostic vs. model-dependent
–Local vs. Global
–Surrogate models
–Example-based
–Visualizations
21
FRAMING TRUST IN MEDICAL AI
Picture by Ybonne Doinell at https://christophm.github.io/interpretable-ml-book/
2. AI UNDERSTANDABLE IN HEALTHCARE
– SOME POPULAR XAI METHODS
• LIME [Ribeiro 2016]
–Model-agnostic
–Local
–Surrogate
–Intuitions
22
FRAMING TRUST IN MEDICAL AI
Picture from Figure 3 from [Ribeiro 2016]
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. Why
should I trust you?: Explaining the predictions of any classifier.
Proceedings of the 22nd ACM SIGKDD international conference on
knowledge discovery and data mining. ACM (2016).
arXiv:1602.04938
2. AI UNDERSTANDABLE IN HEALTHCARE
– SOME POPULAR XAI METHODS
• LIME [Ribeiro 2016]
1. Select the instance of interest from the dataset
2. Perturb the dataset and get the black-box predictions for the new points
3. Weight the new points by their proximity to the instance
4. Fit a new, interpretable ML model on the weighted points
5. Explain the prediction by interpreting the local model
(a minimal from-scratch sketch of these steps follows the references below)
23
FRAMING TRUST IN MEDICAL AI
Picture from Figure 3 from [Ribeiro 2016]
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. Why
should I trust you?: Explaining the predictions of any classifier.
Proceedings of the 22nd ACM SIGKDD international conference on
knowledge discovery and data mining. ACM (2016).
arXiv:1602.04938
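The following is a minimal from-scratch sketch of those five steps for tabular data, assuming a random forest as the black box and a Gaussian proximity kernel; it only illustrates the idea and is not the lime library implementation.

```python
# Minimal sketch of the five LIME steps (assumptions: random-forest black box,
# Gaussian perturbations and proximity kernel, ridge regression as local model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                                   # 1. instance of interest
Z = x0 + np.random.normal(0.0, X.std(axis=0), size=(1000, X.shape[1]))  # 2. perturb dataset
p = black_box.predict_proba(Z)[:, 1]                        # 2. black-box predictions
d = np.linalg.norm(Z - x0, axis=1)
w = np.exp(-(d ** 2) / (2 * d.std() ** 2))                  # 3. weight points by proximity
local = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)         # 4. weighted interpretable model
print("5. local explanation (feature weights):", local.coef_.round(3))
```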
2. AI UNDERSTANDABLE IN HEALTHCARE
– SOME POPULAR XAI METHODS
• SHAPLEY VALUES
–Local/global, model-agnostic
–ML prediction as a cooperative game (coalitions)
» Feature = Player
» Prediction = Reward
» Game = Task of prediction
–Shapley value: how to fairly distribute the reward among the features
24
FRAMING TRUST IN MEDICAL AI
2. AI UNDERSTANDABLE IN HEALTHCARE
– SOME POPULAR XAI METHODS
• SHAPLEY VALUES
–Given an input, it is the average marginal contribution of an attribute value across all possible coalitions.
– E.g. dataset = attr1, attr2, attr3; input X
study attr1=vala → compute all coalitions
attr1=vala + attr2=val2a → ▲
attr1=vala + attr2=val2b → ▽
attr1=vala + attr3=val3a → ▲
attr1=vala + attr3=val3b → ▽
(a toy enumeration of these coalitions is sketched below)
25
FRAMING TRUST IN MEDICAL AI
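As an illustration (an addition, not from the slides), the sketch below computes exact Shapley values for a toy 3-attribute linear model by enumerating every coalition; "absent" attributes are replaced by an assumed baseline value of zero.

```python
# Minimal sketch: exact Shapley values for a toy model by enumerating all coalitions.
from itertools import combinations
from math import factorial

import numpy as np

def model(z):                        # toy black box over attr1, attr2, attr3
    return 2 * z[0] + 1 * z[1] - 3 * z[2]

x = np.array([1.0, 2.0, 0.5])        # input X to explain
baseline = np.zeros(3)               # value used when an attribute is "absent"

def value(coalition):
    """Model output when only the attributes in `coalition` keep their real value."""
    z = baseline.copy()
    z[list(coalition)] = x[list(coalition)]
    return model(z)

def shapley(i, n=3):
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += weight * (value(S + (i,)) - value(S))   # marginal contribution of i
    return phi

print([round(shapley(i), 3) for i in range(3)])   # contributions of attr1, attr2, attr3
```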
2. AI UNDERSTANDABLE IN HEALTHCARE
– SOME POPULAR XAI METHODS
• SHAPLEY VALUES
– Exact Shapley values are hard to compute (the number of coalitions grows exponentially with the number of features)
–Estimation of Shapley values (sampling, Monte Carlo)
–SHAP, Kernel SHAP, Tree SHAP [Lundberg-Lee 2017] (a hedged library example follows the reference below)
26
FRAMING TRUST IN MEDICAL AI
Picture extracted from
[Lundberg-Lee2017]Scott Lundberg and Su-In Lee. A unified approach to
interpreting model predictions. CoRR, abs/1705.07874, 2017. arXiv:1705.07874.
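A hedged usage sketch, assuming the shap Python package is installed: Tree SHAP applied to a random forest trained on synthetic data. The data and model are placeholders, not the medical models discussed in these slides.

```python
# Hedged sketch (assumes the `shap` package): Tree SHAP on a random forest regressor.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # polynomial-time estimation for tree models
shap_values = explainer.shap_values(X[:10])  # one additive contribution per feature
print(shap_values.shape)                     # (10, 5): samples x features
```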
2. AI UNDERSTANDABLE IN HEALTHCARE
– SOME POPULAR XAI METHODS
• SALIENCY MAPS
–Vision: mark the regions people’s eyes focus on first
–XAI: highlight the pixels most relevant for an ANN’s classification (a gradient-based sketch follows the reference below)
27
FRAMING TRUST IN MEDICAL AI
Picture from:
[Alqaraawi2020] Ahmed Alqaraawi, Martin Schuessler, Philipp Weiß, Enrico Costanza, and Nadia Berthouze. Evaluating saliency map explanations for
convolutional neural networks: a user study.Proceedings of the 25th International Conference on Intelligent User Interfaces . 2020. ACM. arXiv:2002.00772
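A minimal sketch of one common saliency method (vanilla gradients) in PyTorch; the tiny CNN and the random input are placeholders standing in for a trained classifier and an X-ray image, not the models from the cited study.

```python
# Minimal sketch: vanilla-gradient saliency map (placeholder model and input).
import torch
import torch.nn as nn

model = nn.Sequential(                       # placeholder classifier for 1-channel images
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

x = torch.randn(1, 1, 64, 64, requires_grad=True)   # stand-in for an X-ray image
score = model(x)[0].max()                            # score of the predicted class
score.backward()                                     # gradient of the score w.r.t. pixels

saliency = x.grad.abs().squeeze()                    # high values = influential pixels
print(saliency.shape)                                # torch.Size([64, 64])
```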
2. AI UNDERSTANDABLE IN HEALTHCARE
– BLACK BOX MEDICINE AND CONCERNS
• Clever Hans phenomenon
–Generalizable model or spurious correlation?
28
FRAMING TRUST IN MEDICAL AI
Oskar Pfungst. Clever Hans (The Horse Of Mr. Von Osten). A Contribution To Experimental Animal and Human Psychology. 1911.
Picture from the Blog of Responsible AI Microsoft by John Roach https://blogs.microsoft.com/ai/azure-responsible-machine-learning/
2. AI UNDERSTANDABLE IN HEALTHCARE
– BLACK BOX MEDICINE AND CONCERNS
• Clever Hans in medicine
– Researchers from Mount Sinai hospital
– Detection of high-risk patients from X-ray imaging
– Applied outside Mount Sinai
» Very low prediction performance
» Why?
29
FRAMING TRUST IN MEDICAL AI
[Amann 2020] Amann, J., Blasimme, A., Vayena, E. et al. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.
BMC Med Inform Decis Mak 20, 310 (2020). https://doi.org/10.1186/s12911-020-01332-6
DICOM
2. AI UNDERSTANDABLE IN HEALTHCARE
– BLACK BOX MEDICINE AND CONCERNS
• SALIENCY MAPS
30
FRAMING TRUST IN MEDICAL AI
Picture from
[Gu et al 2019] Gu J, Tresp V. Saliency methods for explaining adversarial attacks. arXiv 2019; published online Aug 22. http://arxiv.org/ abs/1908.08413 (preprint).
2. AI UNDERSTANDABLE IN HEALTHCARE
– BLACK BOX MEDICINE AND CONCERNS
• ARE EXPLANATIONS UNDERSTANDABLE BY DOCTORS?
LIME SHAP
31
FRAMING TRUST IN MEDICAL AI
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. Why should I trust you?: Explaining the predictions of any classifier. Proceedings of the 22nd ACM
SIGKDD international conference on knowledge discovery and data mining. ACM (2016)
Scott Lundberg and Su-In Lee. A unified approach to interpreting model predictions. CoRR, abs/1705.07874, 2017. arXiv:1705.07874.
Navarro-Nicolas J.A. Explaining Tree SHAP. Undergraduate Thesis. University of Murcia 2021.
2. AI UNDERSTANDABLE IN HEALTHCARE
– XAI DETRACTORS IN MEDICAL AI
• End users: these are not the explanations they need
• AI developers: mainly useful for “debugging”
32
FRAMING TRUST IN MEDICAL AI
• OUTLINE:
1. HUMANS, HEALTHCARE AND ETHICAL AI
2. AI UNDERSTANDABLE IN HEALTHCARE
3. REGULATIONS FOR THE MEDICAL INDUSTRY
4. FINAL
33
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
– GDPR: sensitive data protection
– EU initiatives on regulating AI
– EU Medical Device Regulation
34
FRAMING TRUST IN MEDICAL AI
• GDPR and EXPLAINABILITY CONCERN
– General Data Protection Regulation (GDPR): a regulation, not a directive
– Harmonizes data privacy laws across Europe
– Enforceable / applicable as of May 25th, 2018
– Right to explanation of algorithmic decisions
• Section 5
• Controversial and unclear from the legal scholars’ perspective.
35
PICTURE: GDPR
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
• GDPR and EXPLAINABILITY CONCERN
– Terms not included in this EU regulation
• Explanation right / right of explanation / explainability
• Artificial Intelligence
• Machine Learning
– Terms included in this EU regulation:
• Fairness
• Transparency
• Explanation of the decision
• Automated decision-making
36
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
• EU LEGAL FRAMEWORK OF AI
– Ongoing since 2018
– AI may create risks for the safety of consumers and for fundamental rights.
– Risk-based legal framework proposal (April 2021)
– Applies independently of the origin of the producer or user.
37
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
• EU LEGAL FRAMEWORK OF AI
– Level: Unacceptable risk
– AI that contradicts EU values is prohibited (Title II, Art. 5)
– Subliminal manipulation resulting in physical/psychological harm
– Exploiting vulnerabilities resulting in physical/psychological harm
– Social scoring by public authorities
– Real-time remote biometric identification for law
enforcement purposes in publicly accessible spaces
38
Picture: from the presentation made by Salvatore Scalzo, European Commission “Proposal for EU Legal Framework” 2022.
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
• EU LEGAL FRAMEWORK OF AI
– Level: High risk
– Safety components of regulated products (e.g. medical devices)
– Stand-alone AI-based systems for:
• Biometric identification of a person
• Operation of critical infrastructures
• Education and vocational training
• Employment and workers’ management
• Law enforcement
• Migration, asylum and border control
• Administration of justice
39
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
40
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
– USE CASE 1
Impersonation
Poor explanation
Makes decisions affecting quality of life
No explanation
“right not to be subject to a decision
based solely on automated processing”
41
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
– MEDICAL INDUSTRY: STANDARDS & REGULATIONS
• HOW TO MITIGATE RISKS
• FOCUS: quality, safety, effectiveness and performance
• US FDA: Title 21 of the Code of Federal Regulations
• EU: REGULATION (EU) 2017/745, THE MEDICAL DEVICE REGULATION (MDR)
• STANDARDS:
– ISO 13485 Medical Devices (quality systems)
– ISO 14971 Risk management for medical devices
– IEC 62304 Software lifecycle for medical devices
– IEC 60601-1 Programmable electrical medical systems
– IEC 82304-1 Software as medical devices
– Etc.
42
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
– MEDICAL INDUSTRY: STANDARDS & REGULATIONS
• REGULATION (EU) 2017/745 on Medical Devices (MDR)
– Fully applicable 26/May/2021
– MEDICAL DEVICE DEFINITION (Art. 2)
“Medical Device: any instrument, apparatus, appliance, software,
implant, […] for human beings for one or more of the following
specific medical purposes: diagnosis, prevention, monitoring,
prediction, prognosis, treatment or alleviation of disease […].”
43
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
– MEDICAL INDUSTRY: STANDARDS & REGULATIONS
• REGULATION (EU) 2017/745 on Medical Devices (MDR)
SOFTWARE CAN BE A MEDICAL DEVICE:
– OPERATING SYSTEMS?
– MANAGEMENT AND ADMINISTRATIVE SOFTWARE?
– SOFTWARE FOR MEDICAL TRAINING PURPOSES (goal is training, not patients)?
– SOFTWARE CONTROLLING A MEDICAL DEVICE?
– ALGORITHM POST-PROCESSING A BIOSIGNAL?
– SOFTWARE POST-PROCESSING FOR DATA PREPARATION?
– DIAGNOSIS ALGORITHM?
– TREATMENT ALGORITHM?
44
FRAMING TRUST IN MEDICAL AI
3. REGULATIONS FOR THE MEDICAL INDUSTRY
– MEDICAL INDUSTRY: STANDARDS & REGULATIONS
• REGULATION (EU) 2017/745 on Medical Devices (MDR)
– USE CASE 1: MEDICAL DEVICE? CLASS?
– USE CASE 2: MEDICAL DEVICE? CLASS?
• OUTLINE:
1. HUMANS, HEALTHCARE AND ETHICAL AI
2. AI UNDERSTANDABLE IN HEALTHCARE
3. REGULATIONS FOR THE MEDICAL INDUSTRY
4. FINAL
45
FRAMING TRUST IN MEDICAL AI
46
FRAMING TRUST IN MEDICAL AI
4. FINAL
Ethical Guidelines
Trustworthy AI
Lost in translation!
XAI
LIME, SHAP, Saliency
GDPR
EU AI initiatives
Standards & MDR
Medical AI
Black Boxes
47
FRAMING TRUST IN MEDICAL AI
4. FINAL
TRUST IN MEDICAL AI
HUMANS
ETHICS
HEALTHCARE
DOCTORS
PATIENTS
TECHNOLOGY
UNDERSTAND.
XAI
INDUSTRY
REGULATIONS
STANDARDS
Contact: Jose M. Juarez
jmjuarez@um.es
FRAMING TRUST IN MEDICAL AI
SEMINAR: FRAMING TRUST IN MEDICAL AI
REFERENCES & READINGS
FRAMING TRUST IN MEDICAL AI
Materials & references of the seminar at:
https://webs.um.es/jmjuarez/trustAImedicine/