Ethics of Generative AI in Healthcare
An Overview of
Core Ethical Principles, Challenges, and Governance
Vaikunthan Rajaratnam
MBBS, FRCS(Ed), MBA, MIDT, PhD (Education)
Senior Consultant, Hand & Reconstructive Microsurgery, KTPH, Singapore
Program Director Instructional Design for Healthcare & AI in Healthcare NHG, Singapore
Adjunct Professor/UNESCO Chair Partner, Asia Pacific University of Technology and Innovation, Malaysia
Honorary Professor, De Montfort University, UK
Honorary Professor, RCSI & UCD Malaysia
Research Fellow, Centre for Artificial Intelligence and Data Analytics, Binary University (Malaysia)
Disclaimer
– I am not an AI expert, nor do I possess coding knowledge specific to
the underlying mechanisms of AI models;
– My expertise lies in the utilisation of these models, such as ChatGPT,
based on my extensive experience as a user within the fields of
healthcare, medical education, and related research, rather than their
technical development or underlying algorithms.
– This workshop is intended solely for educational and informational
purposes in AI and healthcare.
– The views expressed herein are my own, borne from extensive
experience in surgery, medical education, and instructional design,
and do not necessarily reflect those of any associated institutions.
– While I endeavour to provide accurate and up-to-date information, no
guarantee is given regarding its applicability.
– By engaging in this workshop, participants acknowledge and assume
responsibility for their use of the information provided.
Vaikunthan Rajaratnam
Rahman, N., Thamotharampillai, T., & Rajaratnam, V. (2024).
(Rajaratnam, 2024)
Introduction
Overview of Generative AI's role:
• Diagnostics
• Treatment
• Data analysis
Ethical framework:
• Address challenges
• Potential impacts
Bioethical principles:
• AI-driven healthcare
(Rajaratnam, 2024, Asian Hospital & Healthcare Management, Issue 65, p. 43)
(Rajaratnam, 2024, Asian Hospital & Healthcare Management, Issue 65, p. 44)
Beneficence:
• Maximise AI’s
benefits for patient
outcomes and
quality of care.
Nonmaleficence:
• Avoid harm with
robust, transparent
algorithms.
Autonomy:
• Ensure patient
decisions are
respected alongside
AI-driven
recommendations.
Justice:
• Address biases to
ensure equitable
access to AI
benefits.
Core Ethical Principles in AI for Healthcare
• At all stages of
approval,
implementation
and evaluation
•Stakeholder education
•Informed consent
•Appropriate and
authorised use of
data
•AI decision making
model
•Patient and clinician
autonomy
• Data Governance
Panels
• Design AI and
Technology
grounded on
social justice
FAIRNESS TRANSPARENCY
ACCOUNTABILITY
TRUSTWORTHINESS
Figure 10: Governance model for A.I. in Healthcare
Rahman, N., Thamotharampillai, T., & Rajaratnam, V. (2024).
Figure 9: Process for shared ethical decision-making in healthcare
Rahman, N., Thamotharampillai, T., & Rajaratnam, V. (2024).
Conditions for shared decision-making. For each component: a short
description, an expanded description of what is required of doctors
(which can be perceived as minimum standards), and how AI can
undermine that condition for shared decision-making.

(a) Understanding the patient's condition
Required of doctors: understand the connection between patients'
conditions and the need for potential interventions on a general,
technical, and normative level, and translate them into individual
patients' particular contexts.
How AI can undermine it: if the clinical outcome of AI is beyond what
doctors are able to understand themselves, their clinical competence is
undermined, and with it a crucial presupposition for why patients have
reason to trust them in the first place [38].

(b) Trust in evidence
Required of doctors: base decisions on sources of evidence they trust,
to ensure the information is relevant and adequate.
How AI can undermine it: if doctors suggest treatments on the basis of
AI sources of information they cannot fully account for, they force
patients to place blind trust in their recommendations. This is just
another version of paternalism.

(c) Due assessment of benefits and risks
Required of doctors: understand all relevant information about benefits
and risks and the trade-offs between them.
How AI can undermine it: if doctors cannot fully understand how and why
AI has reached an outcome, say, the classification of an x-ray,
uncertainty regarding assessments of risks, benefits, and trade-offs
will follow. This, in turn, undermines patients' reasons to have
confidence in their judgments and their role as the expert in the
relationship.

(d) Accommodating the patient's understanding, communication, and deliberation
Required of doctors: convey an assessment of risks and benefits to
patients in a clear and accessible manner, ensure they have understood
the information, and invite them to share their thoughts and deliberate
together on the matter.
How AI can undermine it: if AI systems make it hard for doctors to
understand how and why they reach their outcomes, doctors cannot
facilitate patients' understanding. Instead, they will have to
paternalistically require that the patient accept that the AI
'knows best'.

Rahman, N., Thamotharampillai, T., & Rajaratnam, V. (2024).
Evidence-Based Validation Workflow

1. Human review of AI output
• Critically assess relevance and plausibility
2. Key term extraction
• Extract central terms
3. Strategic search in academic databases
• Focused search using the extracted terms and advanced search techniques
4. Literature screening
• Select relevant scholarly articles, prioritising high-quality,
peer-reviewed sources, for in-depth analysis
5. Evidence-based validation
• Critically evaluate the literature
• Corroborate or challenge the AI-generated information
• Look for consistency and consensus
6. Synthesis and documentation
• Compile and synthesise findings
• Validate the AI's responses
• Document and cite the sources
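The validation workflow above can be sketched as a minimal human-in-the-loop pipeline. This is an illustrative sketch only: `Claim`, `extract_key_terms`, `screen_literature`, and `validate` are hypothetical names, not a real library, and the keyword match is a toy stand-in for a proper database search. Steps 1 and 6 (critical human review, synthesis and citation) remain manual.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                                   # the AI-generated statement under review
    key_terms: list = field(default_factory=list)
    sources: list = field(default_factory=list)
    validated: bool = False

STOPWORDS = {"the", "a", "an", "of", "in", "and", "is", "are", "to"}

def extract_key_terms(claim: Claim) -> Claim:
    """Step 2: pull central terms to drive a focused database search."""
    claim.key_terms = [w.strip(".,").lower() for w in claim.text.split()
                       if w.lower() not in STOPWORDS]
    return claim

def screen_literature(claim: Claim, candidates: list) -> Claim:
    """Steps 3-4: keep peer-reviewed sources whose abstracts match the key terms."""
    claim.sources = [s for s in candidates
                     if s["peer_reviewed"]
                     and any(t in s["abstract"].lower() for t in claim.key_terms)]
    return claim

def validate(claim: Claim, min_sources: int = 2) -> Claim:
    """Step 5: require independent corroboration before accepting the AI output."""
    claim.validated = len(claim.sources) >= min_sources
    return claim

# Toy run: two peer-reviewed matches corroborate the claim, one blog post is dropped.
claim = validate(screen_literature(
    extract_key_terms(Claim("ambient recording improves consultation quality")),
    candidates=[
        {"peer_reviewed": True,  "abstract": "Ambient recording in consultations..."},
        {"peer_reviewed": True,  "abstract": "Consultation quality metrics..."},
        {"peer_reviewed": False, "abstract": "Blog post on ambient recording"},
    ]))
print(claim.validated)  # → True
```

The point of the sketch is the ordering, not the matching logic: the AI output is never cited directly; it only generates search terms, and acceptance is gated on independent, peer-reviewed corroboration.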
Domains of analysis in a sociotechnical approach to digital health ethics. Adapted from J. A. Shaw and J.
Donia, ‘The Sociotechnical Ethics of Digital Health: A Critique and Extension of Approaches From Bioethics’,
Front. Digit. Health, vol. 3, 2021.
Models compared on: transformative and disruptive technology; ethical
attention; division of labour; benefits; challenges.

Ordinary evidence model
• Transformative and disruptive technology: No
• Ethical attention: mainly in use, not design
• Division of labour: distinct
• Benefits: fits well with widely held notions of professional responsibility
• Challenges: lacks a proper response to challenges pertaining to
algorithmic risk, transparency, and accountability

Ethical design model
• Transformative and disruptive technology: Yes
• Ethical attention: mainly in design, not in use
• Division of labour: distinct
• Benefits: takes the distinct ethical challenges of medical AI seriously
• Challenges: technocratic view of ethical choices and the problem of
formalising ethics

Collaborative model
• Transformative and disruptive technology: Yes
• Ethical attention: both in design and use
• Division of labour: integrated
• Benefits: alleviates some of the accountability problems and promotes
shared decision-making
• Challenges: no proper response to severe ethical risks

Public deliberation model
• Transformative and disruptive technology: Yes
• Ethical attention: both in design and use, and in the public sphere
• Division of labour: partly distinct, partly integrated
• Benefits: can deal with "meta-ethical risks"
• Challenges: the models need more organisational specification
Rahman, N., Thamotharampillai, T., & Rajaratnam, V. (2024).
Meta-Ethical Risks

Moral Relativism vs. Objectivism:
• Context-dependent ethics (relativism) vs universal moral truths (objectivism)
• Inconsistent ethical standards across cultures
Value Alignment:
• Human ethical values vs AI systems or policies
• Ethically misaligned decisions
Ethical Reductionism:
• Oversimplification of complex ethical issues
• Example: encoding fairness may lead to biases
Existential Ethical Uncertainty:
• Uncertainty of ethical principles within autonomous AI systems
• Accountability risk and loss of human control
Instrumentalisation of Ethics:
• Compliance tool vs core guiding principle
• Superficial adherence to ethics
(Rajaratnam, 2024, Asian Hospital & Healthcare Management, Issue 65, p. 44)
My Use Cases:
• AIHP workshops
• Ambient clinical recording
(Rajaratnam, 2024, Asian Hospital & Healthcare Management, Issue 65, p. 44)
Governance and Regulatory Considerations
• Ethical Governance Models: integrating healthcare professionals and
bioethicists
• Standardisation: establishing guidelines for AI use
• Ethics in AI Design: accommodating patient values
• Policy and Monitoring: continuous updates to match AI advancements
Sociotechnical Ethics and Future Directions
• Anticipatory Governance: monitoring AI's societal impacts
• Addressing Health Inequities: bridging healthcare gaps
• Case Studies: practical examples
• Multidisciplinary dialogue and policy advocacy
(Rajaratnam, 2024, Asian Hospital & Healthcare Management, Issue 65, p. 45)
Conclusion
Collaboration,
research, and
policy in ethical
AI development.
Contact
vaikunthan@gmail.com
