THE HEALTHCARE
EXECUTIVE GUIDE
TO INTELLIGENCE-
BASED MEDICINE
ARLEN MEYERS, MD, MBA
EMERITUS PROFESSOR, UNIVERSITY OF COLORADO
SCHOOL OF MEDICINE
PRESIDENT AND CEO, SOCIETY OF PHYSICIAN
ENTREPRENEURS
ADVISOR AND PRINCIPAL, MI10
HYPE OR
HOPE?
WHAT YOU
WILL
LEARN
The basic concepts of artificial intelligence in medicine
The barriers to dissemination and implementation
The hazards of using AI in medicine
How AI is being used in medicine to improve quality, access, the patient
experience, the doctor experience and business processes at lower cost.
PLEASE DISCUSS
Google Maps, Alexa,
customer service chatbots,
some mobile medical apps
and smart TVs use artificial
intelligence.
What has been your
experience using these?
Concerns?
WHAT IS HEALTHCARE ENTREPRENEURSHIP?
Pursuit of opportunity
Under VUCA (volatile,
uncertain, complex,
ambiguous)
conditions
Goal is to create
stakeholder defined
value
Through the
deployment of
innovation
Using a VAST (viable,
automatic, scalable,
time sensitive)
business model
WHAT IS DIGITAL HEALTH?
THE APPLICATION OF
INFORMATION AND
COMMUNICATION
TECHNOLOGIES (ICTS) TO
EXCHANGE MEDICAL
INFORMATION FOR VARIOUS
INTENDED USES
• Telehealth
• Electronic medical records
• Big data and analytics
• Remote patient monitoring
• Patient reported outcomes
• Virtual and augmented reality
• Blockchain
• Artificial intelligence
• Mobile medical apps
• Digital therapeutics
WHAT IS ARTIFICIAL INTELLIGENCE?
• There is no universal definition of artificial intelligence (AI). AI is generally
considered to be a discipline of computer science that is aimed at developing
machines and systems that can carry out tasks considered to require human
intelligence. Machine learning and deep learning are two subsets of AI. In
recent years, with the development of new neural network techniques and
hardware, AI is usually perceived as a synonym for “deep supervised
machine learning”.
WHAT IS MACHINE LEARNING?
• Machine learning uses examples of input and expected output (so called
“structured data” or “training data”), in order to continually improve and make
decisions without being programmed how to do so in a step-by-step
sequence of instructions. This approach mimics actual biological cognition: a
child learns to recognize objects (such as cups) from examples of the same
objects (such as various kinds of cups). Today, applications of machine
learning are widespread, including email spam filtering, machine translation,
and voice, text and image recognition.
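A minimal sketch of this idea, for illustration only; the data and the "flag for review" task below are made up, not drawn from any medical product:

```python
# Supervised machine learning in miniature: the model is never given
# step-by-step rules, only example inputs paired with expected outputs.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [systolic_bp, heart_rate] -> 1 = "flag for review", 0 = "routine".
X_train = [[120, 70], [118, 72], [145, 95], [150, 100], [160, 110], [115, 65]]
y_train = [0, 0, 1, 1, 1, 0]

model = DecisionTreeClassifier().fit(X_train, y_train)

# The trained model generalizes the examples to a new, unseen input.
print(model.predict([[148, 98]]))  # expected output: [1]
```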
WHAT IS DEEP LEARNING?
• Deep learning has evolved from machine learning. Deep learning uses a
plurality of AI algorithms (so called “artificial neural networks”) to recognize
patterns, hence being able to group and classify unlabeled data.
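As a toy sketch only: the "deep" part refers to stacking layers of artificial neurons so the network learns intermediate representations of the raw input on its own. The example below uses scikit-learn's small neural-network classifier on a synthetic dataset; real medical deep learning systems are far larger and are typically built with frameworks such as TensorFlow or PyTorch.

```python
# A multi-layer ("deep") neural network learning a nonlinear pattern.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # synthetic 2-D data

net = MLPClassifier(hidden_layer_sizes=(32, 16),  # two hidden layers of neurons
                    max_iter=2000, random_state=0)
net.fit(X, y)
print("training accuracy:", net.score(X, y))
```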
APPLICATIONS
• Future clinical evidence generation with real
world data
• Patient-centered care with wearable
technology with embedded AI
• Advanced automated processes with robotic
process automation
• Continual learning health system with edge
computing
• Widespread citizen medicine with artificial
intelligence tools
• Innovative electronic health record with graph
database
• Real-time decision support with deep
reinforcement learning
• Intelligent reality with AI and virtual reality
• Future clinical research with AI and virtual
twins
• Advanced communications with transformer
type natural language processing
BARRIERS TO AI DISSEMINATION AND
IMPLEMENTATION
There are four basic categories of barriers:
1) Technical
2) Human factors
3) Environmental, including legal, regulatory, ethical, political, societal and economic determinants
4) Business model barriers to entry
HAZARDS AND LANDMINES
• Trust
• The Black Box problem: transparency
• Bias
• Regulatory, economic, ethical and societal issues
TRUST STANDARDS
• Known as ANSI/CTA-2090, "The Use of Artificial Intelligence in Health
Care: Trustworthiness," the standard considers what the association says
are the three key areas relating to how trust is created and maintained
across stakeholders.
HUMAN TRUST
• Human trust is concerned with developing "humanistic factors that
affect the creation and maintenance of trust between the developer
and users," according to CTA. "Specifically, human trust is built upon
human interaction, the ability to easily explain, user experience and
levels of autonomy of the AI solution."
TECHNICAL TRUST
• Technical trust is focused on design and training of AI and machine
learning systems, ensuring they "deliver results as expected."
Additionally, it considers data quality and integrity – such as
algorithmic bias, security, privacy, source and access.
REGULATORY TRUST
• Regulatory trust, meanwhile, is "gained through compliance by
industry based upon clear laws and regulations," said CTA, whether
that's from accreditation boards, regulatory agencies, federal and
state laws or international standardization frameworks.
BUT, WHAT ABOUT CYBERSECURITY?
PLEASE DISCUSS
What are your concerns
about the ethical use of AI
in medicine?
Suppose you were treated
by a doctor who used AI to
make a diagnosis which
turned out to be wrong.
What would you do?
WHO ETHICAL PRINCIPLES
• Protecting human autonomy: In the context of health care, this means
that humans should remain in control of healthcare systems and
medical decisions; privacy and confidentiality should be protected, and
patients must give valid informed consent through appropriate legal
frameworks for data protection.
WHO PRINCIPLES
• Promoting human well-being and safety and the public
interest. The designers of AI technologies should satisfy regulatory
requirements for safety, accuracy and efficacy for well-defined use
cases or indications. Measures of quality control in practice and
quality improvement in the use of AI must be available.
WHO PRINCIPLES
• Ensuring transparency, explainability and
intelligibility. Transparency requires that sufficient information be
published or documented before the design or deployment of an AI
technology. Such information must be easily accessible and facilitate
meaningful public consultation and debate on how the technology is
designed and how it should or should not be used.
WHO PRINCIPLES
• Fostering responsibility and accountability. Although AI
technologies perform specific tasks, it is the responsibility of
stakeholders to ensure that they are used under appropriate
conditions and by appropriately trained people. Effective mechanisms
should be available for questioning and for redress for individuals and
groups that are adversely affected by decisions based on algorithms.
WHO PRINCIPLES
• Ensuring inclusiveness and equity. Inclusiveness requires that AI
for health be designed to encourage the widest possible equitable use
and access, irrespective of age, sex, gender, income, race, ethnicity,
sexual orientation, ability or other characteristics protected under
human rights codes.
WHO PRINCIPLES
• Promoting AI that is responsive and sustainable. Designers, developers
and users should continuously and transparently assess AI applications
during actual use to determine whether AI responds adequately and
appropriately to expectations and requirements. AI systems should also be
designed to minimize their environmental consequences and increase
energy efficiency. Governments and companies should address anticipated
disruptions in the workplace, including training for health-care workers to
adapt to the use of AI systems, and potential job losses due to use of
automated systems.
USE CASES IN MEDICINE
• Radiology and imaging
• Clinical decision support
• Personalized medicine
• Robotic process automation
• Patient education, experience, engagement and behavior change enablement
AI CHALLENGES IN HEALTHCARE
• Addressing the ethical, legal, and societal implications of AI
• Ensuring the safety and security of AI systems
• Developing shared public datasets and environments for AI training and
testing
• Evaluating AI technologies using standards and benchmarks
• Understanding the national AI R&D workforce needs
• Expanding public-private partnerships to accelerate advances in AI
DIGITAL TRANSFORMATION AND
DEXTERITY
• CULTURE
• PROCESSES
• TECHNOLOGY
• PEOPLE
• LEADERSHIP
• ALIGNMENT
• EDUCATION AND TRAINING
10 COMPUTER VISION APPLICATIONS
• Blood loss measurement
• Cardiology ultrasound/EKG
• Tumor detection
• Cancer detection
• Illness prediction
• AR/VR in surgery
• COVID staff, stuff and space
• Voice diagnosis
• Retinal scanning
• Melanoma detection
VOCALISCHECK
• AI-based vocal biomarker company Vocalis Health received a CE
mark for its Covid-19 'voice check' tool. VocalisCheck is a software-
only product that collects a single voice sample from users simply by
counting from 50 to 70. The recording is transformed into a picture
(spectrogram) containing 512 features, which are analysed by
machine learning. The tool was recently validated in a large clinical
study demonstrating accuracy, sensitivity and specificity above
80%, even among asymptomatic cohorts.
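The general pipeline (not Vocalis Health's actual method, which is proprietary) can be sketched as: record audio, convert it to a spectrogram, summarize the spectrogram as a fixed-length feature vector, and train a classifier on labeled examples. A hypothetical illustration with made-up data:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

def voice_to_features(samples, sample_rate=16000, n_features=512):
    """Turn a raw voice recording into a fixed-length spectrogram-based feature vector."""
    freqs, times, spec = spectrogram(samples, fs=sample_rate)
    spec = np.log1p(spec)               # compress dynamic range
    profile = spec.mean(axis=1)         # average energy per frequency band
    return np.resize(profile, n_features)

# Made-up "recordings" and labels purely to show the mechanics (1 = positive).
rng = np.random.default_rng(0)
recordings = [rng.standard_normal(16000) for _ in range(20)]
labels = [i % 2 for i in range(20)]

X = np.array([voice_to_features(r) for r in recordings])
y = np.array(labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```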
IMAGE DETECTION OF SKIN CANCER
• AI can be taught to flag possible skin cancers on photos taken with
smartphone cameras—and the images can be ordinary “people shots”
rather than closeups of suspicious lesions. Using lesion classification
by experienced dermatologists as the ground truth, the researchers
found their system achieved sensitivity and specificity of right around
90% each when tasked with separating suspicious lesions from
benign skin discolorations and busy backgrounds.
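For reference, sensitivity and specificity are computed directly from the model's predictions versus the dermatologists' ground truth. A small worked example with hypothetical numbers (not the study's data):

```python
# Sensitivity = true positives / all actual positives; specificity = true negatives / all actual negatives.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# 1 = suspicious lesion (per dermatologist), 0 = benign discoloration or background.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 80% / 80% in this toy example
```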
USING METADATA
• https://www.youtube.com/watch?v=LH2F8PNJdu0
What do I do?
PLEASE DISCUSS
• Will the use of artificial intelligence in medicine eliminate
doctors or other healthcare professionals?
• Why or why not? Would you want it to?
• How will it impact the future of medical education and
workforce needs?
• Will AI widen the gaps between the haves and have-nots?
PATIENT AI BILL OF RIGHTS
• Transparency: We have the right to know when an algorithm is making a
decision about us, which factors are being considered by the algorithm, and
how those factors are being weighted.
• Explanation: We have the right to be given explanations about how
algorithms affect us in a specific situation, and these explanations should be
clear enough that the average person will be able to understand them.
• https://www.vox.com/the-highlight/2019/5/22/18273284/ai-algorithmic-bill-of-rights-accountability-transparency-consent-bias
BILL OF RIGHTS
• Consent: We have the right to give or refuse consent for any AI
application that has a material impact on our lives or uses sensitive
data, such as biometric data.
• Freedom from bias: We have the right to evidence showing that
algorithms have been tested for bias related to race, gender, and
other protected characteristics — before they’re rolled out. The
algorithms must meet standards of fairness and nondiscrimination and
ensure just outcomes.
BILL OF RIGHTS
• Feedback mechanism: We have the right to exert some degree of
control over the way algorithms work.
• Portability: We have the right to easily transfer all our data from one
provider to another.
• Redress: We have the right to seek redress if we believe an
algorithmic system has unfairly penalized or harmed us.
BILL OF RIGHTS
• Algorithmic literacy: We have the right to free educational resources about
algorithmic systems.
• Independent oversight: We have the right to expect that an independent
oversight body will be appointed to conduct retrospective reviews of
algorithmic systems gone wrong. The results of these investigations should
be made public.
• Federal and global governance: We have the right to robust federal and
global governance structures with human rights at their center. Algorithmic
systems don’t stop at national borders, and they are increasingly used to
decide who gets to cross borders, making international governance crucial.
7 LESSONS
• #1 AI investment has weathered Covid disruption
• #2 AI is in a process of democratization
• #3 New initiatives have high-level endorsement from C-Suite
• #4 Customer service will remain one of the key applications
• #5 Some AI use cases are finding homes on other hype cycles
• #6 Gartner recommends a focus on narrow use cases
• #7 The AI hype cycle makes it impossible to pick winners or losers
• https://www.babelforce.com/blog/perspectives/7-lessons-from-gartners-ai-hype-cycle-2020/
WANT TO LEARN MORE?
• Free online introduction to artificial intelligence
• https://www.elementsofai.com/
• www.abaim.org
• www.misociety.org
• www.MI10.ai
• www.AI-med.io
THANK YOU
