Therapist AI & ChatGPT: How to Use Legally & Ethically
Joseph P. McMenamin, MD, JD, FCLM
Joe McMenamin is a partner at Christian & Barton in
Richmond, Virginia. His practice concentrates on digital health
and on the application of AI in healthcare.
He is an Associate Professor of Legal Medicine at Virginia
Commonwealth University and Board-certified in Legal
Medicine.
Marlene M. Maheu, PhD
Marlene Maheu, PhD has been a pioneer in telemental health
for three decades.
With five textbooks, dozens of book chapters, and journal
articles to her name, she is the Founder and CEO of the
Telebehavioral Health Institute (TBHI).
She is the CEO of the Coalition for Technology in Behavioral
Science (CTiBS) and the Founder of the Journal of
Technology in Behavioral Science.
© 1994-2023 Telehealth.org, LLC. All rights reserved.
And you? Please introduce yourself with your city and specialty.
Participants will be able to:
• Outline an array of legal and ethical issues implicated by the use of therapist AI and ChatGPT.
• Name the primary reason ChatGPT is not likely to replace psychotherapists in our lifetimes.
• Outline how best to minimize therapist AI and ChatGPT ethical risks today.
Learning
Objectives
Preventing
Interruptions
Maximize your learning by:
• Making a to-do list as we go.
• Turning on your camera & joining the conversation throughout this activity.
• Muting your phone.
• Asking family and friends to stay
away.
We will not be discussing all slides.
• Mr. McMenamin speaks neither for any legal client nor for Telehealth.org
• Is neither a technical expert nor an Intellectual Property lawyer.
• Offers information about the law, not legal advice.
• Labors under a dearth of legal authorities specific to AI.
Speaker Disclaimers
• Must treat some subjects in cursory fashion only.
• Presents theories of liability as illustrations, conceding nothing as to their
validity.
• Criticizes no person or entity, nor AI.
• In this presentation, neither creates nor seeks to create an attorney-client
relationship with any member of the audience.
Speaker Disclaimers
What Uses Can Mental
Health Professionals
Make of AI?
If you have begun
or are considering
using AI or
ChatGPT in your
work, please
outline those
activities in the
chat box.
We will proceed with the presentation while
you do so, then we will come back later.
What Are AI and ChatGPT?
Three Primary Areas:
1. Information Retrieval and Research
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
3. Client & Patient Education
How are AI & ChatGPT being
used to help healthcare practices?
• Programs like Elicit and Claude can provide advanced research capabilities that exceed traditional methods.
• For example, AI at Elicit can extract information from up to 100
papers and present the information in a structured table.
• It can find scientific papers on a question or topic and organize
the data collected into a table.
• It can also discover concepts across papers to develop a table of concepts synthesized from the findings (a code sketch of this extraction pattern appears below).
1. Information Retrieval and
Research
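As an illustration of the extraction-and-tabulation pattern described above (Elicit's own interface requires no code), here is a minimal Python sketch. The `call_llm()` helper is a hypothetical placeholder for whatever vetted research API or assistant one's organization provides, not a real library call.

```python
import csv
import json

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a vetted LLM interface; returns the model's
    text response. The actual implementation is assumed, not shown."""
    raise NotImplementedError

def summarize_papers(papers: list[dict]) -> list[dict]:
    """Ask the model to reduce each abstract to one structured row:
    population, intervention, outcome, and a one-sentence key finding."""
    rows = []
    for paper in papers:
        prompt = (
            "Extract the study population, intervention, primary outcome, and a "
            "one-sentence key finding from this abstract. Respond as JSON with "
            "keys: population, intervention, outcome, finding.\n\n"
            f"Title: {paper['title']}\nAbstract: {paper['abstract']}"
        )
        rows.append({"title": paper["title"], **json.loads(call_llm(prompt))})
    return rows

def write_table(rows: list[dict], path: str = "evidence_table.csv") -> None:
    """Save the structured rows as a CSV table for human review."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

Every generated row would still need to be checked against the source paper before it informs clinical work.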
• Ethical Considerations: Ethical research practices must still apply, ensuring that the information retrieved is evidence-based, peer-reviewed, and handled according to privacy regulations such as HIPAA.
• Issues of ChatGPT copyright ownership must also be considered; just because a system can do something does not mean we should.
1. Information Retrieval and
Research
• Programs like OpenAI's ChatGPT, Bard, Monica, and others can analyze and detect behavioral health issues and potential diagnoses from "prompts" (that is, commands) that include anything from short behavioral descriptions to vast patient datasets.
• They can query for signs of substance use, self-harm, depression, suicidality, etc.
• They can also engage in brainstorming sessions to explore various possible diagnoses, which facts to collect, or areas to explore to arrive at a definitive diagnosis.
• They can incorporate extensive patient data, including medical
history, psychological assessments, and patient demographics.
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
• They use natural language processing (NLP) to extract relevant information from clinical notes, interviews, and questionnaires.
• They can be instructed to incorporate structured data such as diagnostic codes (ICD-10), medication history, and desired treatment outcomes.
• These chatbots can be given established clinical guidelines or consensus documents and asked how one's treatment plan needs to be adjusted to comply with the guidelines (see the illustrative prompt sketch below).
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
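To make the pattern concrete, here is a minimal sketch of how such a prompt might be assembled from de-identified, fictional data. The `call_llm()` helper is again a hypothetical placeholder rather than any particular vendor's API, and any output is brainstorming material for the licensed clinician, never a diagnosis.

```python
def build_case_prompt(icd10_codes, medications, deidentified_note, guideline_excerpt):
    """Assemble a de-identified case summary plus a guideline excerpt into a
    single prompt asking where the current plan may diverge from the guideline."""
    return (
        "You are assisting a licensed clinician. Using only the de-identified "
        "information below, list questions to explore and any apparent gaps "
        "between the current treatment plan and the guideline excerpt.\n\n"
        f"Diagnostic codes (ICD-10): {', '.join(icd10_codes)}\n"
        f"Medication history: {', '.join(medications)}\n"
        f"Clinical note (de-identified): {deidentified_note}\n"
        f"Guideline excerpt: {guideline_excerpt}\n"
    )

# Fictional example:
prompt = build_case_prompt(
    icd10_codes=["F33.1"],                      # major depressive disorder, recurrent, moderate
    medications=["sertraline 100 mg daily"],
    deidentified_note="Adult client reports low mood and poor sleep for six weeks.",
    guideline_excerpt="Reassess response after 4-6 weeks at a therapeutic dose.",
)
# response = call_llm(prompt)   # hypothetical helper; the clinician reviews all output
```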
• Ethical Considerations: All protected health information (PHI) must be meticulously de-identified before uploading any prompts (an illustrative scrubbing sketch follows below).
• Plus, full transparency must be given to clients and patients regarding AI's role in their diagnosis.
• Attention must be paid to the strong biases inherent in AI to ensure that AI doesn't perpetuate existing inequalities.
• HIPAA privacy and copyright laws must also be respected. These requirements take time and attention.
• Practitioners are strongly advised to undertake these activities only after due training.
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
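The first bullet above calls for meticulous de-identification before anything is uploaded. Below is a minimal sketch of that kind of scrubbing step, using a few simple regular expressions. It is illustrative only: HIPAA's Safe Harbor method requires removal of 18 categories of identifiers, and production workflows generally rely on validated de-identification tools and BAA-covered platforms rather than an ad hoc script.

```python
import re

# Illustrative patterns only; real de-identification must cover all 18 Safe Harbor
# identifier categories (names, geography, dates, contact information, etc.).
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with bracketed placeholders before any text
    is pasted into, or sent to, an AI system."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

note = "Client phoned 619-555-0123 on 3/14/2023 to reschedule."
print(scrub(note))  # -> Client phoned [PHONE REMOVED] on [DATE REMOVED] to reschedule.
```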
• These chatbots can develop tailored treatment plans to meet
individual patient needs after considering diagnoses, client or
patient preferences, comorbidities, and responses to previous
treatments.
• Ethical Considerations: Legal and ethical standards for patient privacy, autonomy, and informed consent must be upheld.
• Free ChatGPT systems often publicly announce in their Terms
and Conditions files that they own all information entered into
their systems.
3. Personalized Treatment Plans
https://telehealth.org/ai-and-mental-health-is-it-a-
game-changer-for-your-practice/
• Depression: clients' voices.
• OUD: Narx scores and overdose risk rating.
• Digital therapeutics: CBT for OUD (Pear Therapeutics), now bankrupt.
• Akili Interactive Labs: interactive digital games (like videogames) for ADHD, major depression, ASD, and MS.
Other Uses of ChatGPT by
Professionals
Is Facebook’s Suicide Prevention Service
“Research”?
Facebook
Innovation
• Technique is
innovative, novel.
• Facebook taught its algorithm which text to ignore.
• Proprietary:
Details not
available.
• Informed Consent?
(see below)
Facebook
Accuracy
• Traditional View: Prediction
requires analysis of hundreds of
factors: race, sex, age, SES,
medical history, etc.
• Record of results? Publication?
• Efficacy across races, sexes,
nationalities?
• False Positive: Unwanted psych
care?
• Users: Wariness enhanced?
• Barnett and Torous, Ann. Int. Med.
(2/12/19)
What is AI’s Clinical Reliability?
How AI has helped:
1. Personal Sensing (“Digital Phenotyping”)
Collecting and analyzing data from sensors
(smartphones, wearables, etc.) to identify behaviors,
thoughts, feelings, and traits.
2. Natural language processing
3. Chatbots
D’Alfonso, Curr Opin Psychol. 2020;36:112–117.
1. Machine Learning
• Predict and classify suicidal thoughts, depression, and schizophrenia with "high accuracy".
U. Cal and IBM, https://www.forbes.com/sites/bernardmarr/2023/07/06/ai-in-mental-health-opportunities-and-challenges-in-developing-intelligent-digital-therapies/
2. Causation v. Correlation
• A model predicted a better pneumonia prognosis for asthma patients, reflecting the more aggressive care those patients receive rather than a true protective effect.
6. Hallucinations
• NEDA’s Tessa: Harmful diet advice to patients with
eating disorders.
7. Generalizability
• When training data do not resemble actual data.
• Watson and chemo.
8. No compassion or empathy
9. No conceptual thinking
10. No common sense
Does AI Threaten Privacy?
Big Data
• Amazon's Alexa and the NHS: No sharing of patient data?
• Duration of retention of information?
• Facebook, again: No opt-in or opt-out.
• Targeted ads?
• HIPAA: N/A. No covered entity, no business associate.
– Is de-identification obsolete?
• COPPA: N/A: The child who committed suicide was less than 13 years old.
Privacy laws expanding, yet not clear that existing
laws suffice.
Consider California:
1. HIPAA as amended by HITECH
2. Cal. Confidentiality of Medical Information
Act
3. Cal. Online Privacy Protection Act
4. Cal. Consumer Privacy Act
5. California’s Bot Disclosure Law
6. GDPR
• Yet still not certain the law covers info on
apps.
Facial recognition: both privacy and
discrimination laws.
Has AI Generated Any Privacy Litigation?
PM v OpenAI (N.D. Cal. 2023)
• Purported class action alleges OpenAI
violated users’ privacy rights based on data
scraping of social media comments, chat
logs, cookies, contact info, log-in credentials
and financial info.
Do We Need to License AI to Use it in
Healthcare?
Do We Need to License AI to
Use it in Healthcare?
• Practice of clinical psychology includes but is not limited to: ‘Diagnosis and
treatment of mental and emotional disorders’ which consists of the appropriate
diagnosis of mental disorders according to standards of the profession and the
ordering or providing of treatments according to need.
• Va. Code § 54.1-3600
• Other professions have similar statutes across the 50 states & territories.
Do We Need to License AI to
Use it in Healthcare?
• Definitions of medicine, psychology, nursing, etc.:
• Likely broad enough to encompass AI functions.
• An AI system is not human, but if it functions as a HC professional, some propose
licensure or some other regulatory mechanism.
Do We Need to License AI to
Use it in Healthcare?
If licensure is needed:
• In what jurisdiction(s)?
• Consider scope of practice.
What Does FDA Say About AI in
Healthcare?
What Does FDA Say About AI in Healthcare?
• Regulatory framework is not yet fully developed.
• Historical: Drug or device maker wishing to modify product submits proposal,
and supporting data; FDA says yes or no.
• FDA recognizes potential for drug development and the impediments that fusty
regulation could erect.
What Does the Food and Drug Administration (FDA) Say About AI in Healthcare?
• Concerned with transparency (can it be explained? intellectual property) and
security and integrity of data generated; potential for amplifying errors or
biases.
• FDA urges creation of a risk management plan, and care in choice of training
data, testing, validation.
• Pre-determined change control plans.
FDA Approvals of
AI/ML Devices
What Types of Clinical Decision Software
(“CDS”) Will FDA Regulate Most Closely?
FDA Concerns
1. CDS to “inform clinical management for serious or critical situations or
conditions” especially where the health care provider cannot independently
evaluate basis for recommendation.
2. CDS functions intended for patients to inform clinical management of non-
serious conditions or situations, and not intended to help patients evaluate
basis for recommendations.
3. Software that uses a patient's images to create treatment plans for health care provider review for patients undergoing radiation therapy (RT) with external beam or brachytherapy.
May I Use AI in Hiring?
Yes.
• Resume evaluations.
• Scheduling interviews.
• Sourcing data.
What Have the States to Say About AI in
Employment Decisions?
• Most States: Silent so far.
• Ill., Md., and NYC: Employers need candidate’s
consent to use AI in hiring.
• NYC: Must prove to a third-party audit company that the employer's process was free of sex- or race-based bias.
Can AI Be Liable in Tort?
• Not human, and not a legal person.
– Cannot be directly liable for its own negligence or serve as an agent for vicarious liability.
• Many different SW and HW developers take part.
• Control hard to determine, given:
• Discreteness: Parts made at different times in
different places without coordination.
• Diffuseness: Developers may not act in conjunction.
Yet: Consider corporations and ships (an “in rem” action
in admiralty law)
Does AI Owe a Duty to Clients?
• Duty is a question of law, decided by the court.
• In health care, duty arises from the professional relationship.
– Can AI have such a relationship?
– A consulting physician who does not interact with the patient owes no duty to that patient.
See Irvin v. Smith, 31 P.3d 934, 941 (Kan. 2001); St. John v. Pope, 901 S.W.2d 420, 424 (Tex. 1995)
• Does AI resemble a consultant?
• Or an MRI, e.g.?
– Epic sepsis model missed 2/3 of cases. JAMA Intern. Med. (6/21)
• Beware Automation Bias
https://telehealth.org/chatgpt-ai-bias/
Can Plaintiffs Impose a Standard of Care
on AI?
• HCP: Reasonableness
– Can AI ever be unreasonable?
– Is the HCP relying on AI immune from liability?
– Higher SOC for HCP using AI?
– Will AI endanger state standards of care?
• Will res ipsa play a role?
– Probably not if the harm is unexplainable, untraceable, and rare.
• Nor can P establish exclusive control.
– But what about the autopilot cases?
Are AI Errors Foreseeable?
• Foreseeability: A precondition of a finding of negligence.
– Law expects the actor to take reasonable steps to reduce the risk of foreseeable harms.
• Software developer cannot predict how unsupervised AI will solve the tasks and problems it encounters.
– Machine teaches itself how to solve problems in unpredictable ways.
– No one knows exactly what factors go into an AI system's decisions.
• The unforeseeability of AI decisions is itself foreseeable.
Are AI Errors Foreseeable?
• Computational models used to generate recommendations are opaque.
– Algorithms may be non-transparent because they rely on rules we humans cannot understand.
– No one, not even programmers, knows what factors go into ML.
• AI's solution may not have been foreseeable to a human, even the human who designed the AI.
– Does that defeat a claim of duty?
Are AI Errors Foreseeable?
• In a black-box AI system, the result of an AI's decision may not have been foreseeable to its creator or user.
– So, will an AI system be immune from liability?
– Will its creator?
Are AI Errors Foreseeable?
What if AI Recommends
Non-standard Treatment?
What if AI Recommends Non-standard
Treatment?
• The progress problem: Arterial blood gas monitoring in premature newborns
circa 1990.
• Non-standard advice: Proceed with caution.
– The tension between progress and tort law.
Can I be Liable for My AI’s Mistake?
• Can AI be my agent?
– No ability to negotiate the scope of authorization.
– Cannot dissolve the agent-principal relationship.
– Cannot renegotiate its terms.
– An agent can refuse agency; a principal can refuse to be the master.
• Agency law does contemplate that the agent will use her
discretion in carrying out the principal’s tasks.
• Who controls the AI, if anyone?
– AI autonomy is increasing.
• If machine is autonomous, could it not embark on
a frolic and detour beyond the scope of its
employment?
If AI Can be an Agent, What or Who is its
Principal?
• Note the decline of the “Captain of the Ship” doctrine.
• Possibilities:
• Component designer?
• Medical device company?
• The owner of the AI’s algorithm?
• Whoever maintains the product?
• Health care professionals?
Possibilities (cont’d):
• Hospitals and health care systems?
• Pharmaceutical companies?
• Professional schools?
• Insurers?
• Regulators?
Could I be Liable for Promoting AI?
• Hospitals: Large investments in robotic systems, e.g.
– Procedures more expensive.
– By shifting resident teaching time from standard laparoscopy to robotic surgery, we may produce "high-cost" surgeons whom insurers will penalize.
• Damage to the professional relationship?
– The rapport problem.
Does the Law Require the Patient’s
Informed Consent to Use of AI in Health
Care?
Does the Law Require the Patient’s
Informed Consent to Use of AI in Health
Care?
• Traditional:
– "Every human being of adult years and sound mind has a right to determine what shall be done with his own body."
Schloendorff v. Society of New York Hospital, 105 N.E. 92 (N.Y. 1914) (Cardozo, J.)
• AI: What disclosures are required?
(cont’d)
• Explain how AI works?
– What does "informed" mean where no one knows how black-box AI works?
• Whether the AI system was trained on a data set
representative of a particular patient population?
• Comparative predictive accuracy and error rates of AI system
across patient subgroups?
• Roles human caregivers and the AI system will play during
each part of a procedure?
(cont’d)
• Whether a medtech or pharma company influenced an
algorithm?
• Compare results with AI and human approaches?
– What if there are no data?
• What if the patient doesn’t want to know?
• Provider’s financial interest in the AI used?
• Disclose AI recommendations HCP disapproves, or COIs?
(cont’d)
• Pedicle screw litigation: Used off-label.
– At present, nearly all AI is used off-label.
• Investigative nature of the device's use?
– Rights of subjects in clinical trials?
• Experimental procedures: “most frequent risks and hazards” will
remain unknown until the procedure becomes established.
Will Plaintiffs be Able to Prevail on Product
Liability Claims?
• A creature of state law.
– Theories of liability sound in negligence, strict liability, or breach of warranty.
• Responsibility of a manufacturer, distributor, or seller of a defective product.
– Is AI a "product" or a service?
– The law has traditionally held that only personal property in tangible form can be considered "products."
– The law has traditionally considered software to be a service.
Will Plaintiffs be Able to Prevail on
Product Liability Claims?
• Claimant must prove the item that caused the injury was defective at the time it left the seller's hands.
– By definition, ML changes the product over time.
• Suppose an AI system is used to detect abnormalities on MRIs automatically and is advertised as a way to improve productivity in analyzing images:
– No problem interpreting high-resolution images, but
– Fails with images of lesser quality.
– Likely: A products liability claim for both negligence and failure to warn.
Will Plaintiffs be Able to Prevail on
Product Liability Claims?
• No matter how good the algorithm is, or how much better it is than a human, it will occasionally be wrong.
– Exception to strict liability for unavoidably unsafe products (Restatement).
• Imposing strict liability: Would likely slow down or cease production
of this technology.
Will Plaintiffs be Able to Prevail on
Product Liability Claims?
Is There a Duty to Warn?
Duty to warn: Traditional
• Products:
1. Manufacturer knew or should have known that the product poses substantial risk to the user.
2. Danger would not be obvious to users.
3. Risk of harm justifies the cost of providing a warning.
• Mental Health:
– Tarasoff v. The Regents of the University of California (1976)
• Learned intermediary (LI) rule:
1. Likelihood harm will occur if the intermediary does not pass on the warning to the ultimate user.
2. Magnitude of the probable harm.
3. Probability that the particular intermediary will not pass on the warning.
4. Ease or burden of the giving of the warning by the manufacturer to the ultimate user.
Will Plaintiffs be Able to Prove Causation?
• Causation will often be tough in AI tort cases.
• Demonstrating the cause of an injury: Already hard in health care.
– Outcomes frequently probabilistic rather than deterministic.
• AI models: Often nonintuitive, even inscrutable.
– Causation even more challenging to demonstrate.
• There may be no design or manufacturing flaw if the robot involved in an accident was properly designed; yet, based on the structure of the computing architecture or the learning taking place in deep neural networks, an unexpected error or reasoning flaw could still have occurred.
– Mracek v. Bryn Mawr Hospital, 610 F. Supp. 2d 401 (E.D. Pa. 2009), aff'd, 363 Fed. Appx. 925, 927 (3d Cir. 2010)
Who is an Expert?
Who is an Expert?
• Trial Court: Cardiologist not qualified to testify on a weight-loss drug combination that a proprietary software package recommended because the doctor is not a software expert.
– Skounakis v. Sotillo, No. A-2403-15T2 (N.J. Super. Ct. App. Div. Mar. 19, 2018) (reversed on appeal)
Who is an Expert?
• MD who had performed many robotic surgeries not qualified on causation for want of programming expertise.
– Mracek v. Bryn Mawr Hospital, 363 F. App'x 925, 926 (3d Cir. 2010) (ED complicating robotic prostatectomy)
Marketing: Should We Expect Breach of
Warranty Claims?
• A warranty may arise by an affirmation of fact or a promise made by the seller relating to the product. See U.C.C. § 2-313.
– Need not use special phrases or formal terms ("guarantee"; "warranty").
• Promotion of an AI system as a superior product may create a cause of action for breach of warranty.
– Darringer v. Intuitive Surgical, Inc., No. 5:15-cv-00300-RMW, 2015 U.S. Dist. LEXIS 101230, at *1, *3 (N.D. Cal. Aug. 3, 2015) (another DaVinci robot case)
Marketing: Should We Expect
Breach of Warranty Claims?
Is AI a Person?
Is AI a Person?
Of course not.
• Artificial agents lack self-consciousness,
human-like intentions, ability to suffer,
rationality, autonomy, understanding, and
social relations deemed necessary for
moral personhood.
But:
Is AI a Person?
But:
• Could serve useful cost-spreading and
accountability functions.
• EU Parliament, 2017: Recognizing autonomous robots as "having the status of electronic persons responsible for making good any damage they may cause".
– Compulsory insurance scheme.
• Opponents:
• Harm caused by even fully autonomous
technologies is generally reducible to
risks attributable to natural persons or
existing categories of legal persons.
• Even limited AI personhood (corps, e.g.)
will require robust safeguards such as
having funds or assets assigned to the
AI person.
Will Plaintiffs be Able to Impose
Common Enterprise Liability with
AI?
Example: Hall v. Du Pont, 345 F.Supp. 353
(E.D.N.Y. 1972)
• 1955-’59: Blasting caps injured 13 kids, 12 incidents, 10 states.
• Claim: Failure to warn.
• Ds: 6 cap manufacturers + their trade association (TA).
• Evidence: Acting independently, Ds adhered to industry-wide safety
standard; delegated labeling to TA; industry-wide cooperation in the
manufacture and design of blasting caps.
• Held: If Ps could show that at least one D manufacturer made the caps, the burden of proof on causation would shift to the Ds.
Example: Hall v. Du Pont, 345 F.Supp. 353 (E.D.N.Y. 1972)
• Theory: Clinicians, manufacturers of clinical AI systems, and hospitals that employ the systems are engaged in a common enterprise for tort liability purposes.
– As members of a common enterprise, they could be held jointly liable.
– Used where Ds strategically formed and used corporate entities to violate consumer protection law. E.g., Fed. Trade Comm'n v. Pointbreak Media, LLC, 376 F. Supp. 3d 1257, 1287 (S.D. Fla. 2019) (corporations were considered to be functioning jointly as a common enterprise)
(cont’d)
How Can We Defend Ourselves
Against Claims?
• Compliance with FDA regulations: Preemption.
• Policy: No product liability claim encompasses the unpredictable, autonomous machine-mimicking-human behavior underlying AI's medical decision-making.
– Unpredictability of autonomous AI is not a bug, but a feature.
How Can We Defend Ourselves
Against Claims?
• Software is not a product.
– Rodgers v. Christie, 795 F. App'x 878, 878-79 (3d Cir. 2020): The Public Safety Assessment (PSA), an algorithm that was part of the state's pretrial release program, was not a product, so product liability for the murder of a man by a killer on pretrial release did not lie.
1. Not disseminated commercially.
2. Algorithm was neither "tangible personal property" nor tenuously "analogous to" it.
How Can We Defend Ourselves
Against Claims?
• Breach of warranty: Privity.
– Typically the clinician, and not the patient, purchased the system.
• Product misuse, modification: Progress notes, e.g.
– Seller does not know the specifics of these additional records or how the algorithm developed following the provider's use.
• Learned intermediary (LI) doctrine.
How Can We Defend Ourselves
Against Claims?
Will AI Put Me Out of Work?
Will AI Put Me Out of Work?
• ChatGPT can outperform 1st and 2nd year medical students in
answering challenging clinical care exam questions.
• Law students: Similar.
• But: Probably not.
(cont’d)
• John Halamka: “Generative AI is not thought, it's not
sentience.”
• Most, if not all, countries are experiencing severe clinician shortages.
– Shortages are predicted only to get worse in the U.S. until at least 2030.
(cont’d)
• AI-infused precision health tools might well be essential to
improving the efficiency of care.
• AI might help with burnout: easing the day-to-day weariness, lethargy, and delay of reviewing patient charts.
• The day may come when the SOC requires use of AI.
Can we Get Paid for Using AI?
• Consider a pathology over-read for an in-patient:
• Whether hospital is in- or out-of-network for patient's insurance
• Whether patient's insurer deems AI to be “medically necessary”
• If in-network, what is the negotiated fee for this specific intervention
between this hospital and this patient's insurer
• Whether deal pays for hospitalization per diem or on Diagnosis Related
Group (DRG) basis
• AI might add nothing to charge
• What percentage of co-insurance the patient must pay
• How much of the deductible the patient will have met by the end of this episode of care (an illustrative calculation follows below).
Can We Get Paid for Using AI?
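The cost-sharing factors listed above combine in ordinary arithmetic. A simplified, illustrative calculation with fictional numbers (ignoring DRG bundling, facility fees, and out-of-pocket maximums) is sketched below.

```python
def patient_responsibility(negotiated_fee: float,
                           deductible_remaining: float,
                           coinsurance_rate: float) -> dict:
    """Split a negotiated fee into what the patient owes (deductible first, then
    coinsurance) and what the insurer pays. Simplified, fictional math only."""
    toward_deductible = min(negotiated_fee, deductible_remaining)
    remainder = negotiated_fee - toward_deductible
    coinsurance = remainder * coinsurance_rate
    return {
        "patient_owes": toward_deductible + coinsurance,
        "insurer_pays": remainder - coinsurance,
    }

# Example: $400 negotiated fee, $150 of deductible left, 20% coinsurance.
print(patient_responsibility(400.00, 150.00, 0.20))
# -> {'patient_owes': 200.0, 'insurer_pays': 200.0}
```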
Consider an outpatient setting:
• Whether the outpatient facility is in or out-of-network for the
patient's insurer.
• Whether the facility is owned by a hospital.
– If hospital-owned, it may add a "facilities fee".
• Whether this patient's insurer deems the AI to be “medically
necessary”.
• Negotiated fee schedule between facility and the patient's insurer.
• How much of the deductible the patient will have met by the
conclusion of this episode of care.
Can We Get Paid for Using AI?
• Provided for "medically necessary" care.
• Not: experimental treatments or devices
• Slow governmental adoption: The telehealth model.
Can We Get Paid for Using AI?
• September 2020: CMS approved the first add-on payment, up to $1,040 on top of inpatient hospitalization costs, for use of Viz.ai software to help detect strokes.
• Was the 43-patient study used to support the company's claim of clinical benefit large enough to warrant the added reimbursement?
Can We Get Paid for Using AI?
Can AI Detect or Prevent Fraud?
Can AI Detect or Prevent Fraud?
• One large health insurer reported a savings of $1 billion annually through AI-prevented fraud, waste, and abuse (FWA).
• Federal appellate court: A company's use of AI for prior authorization and utilization management services to Medicare Advantage and Medicaid managed care plans is subject to qualitative review that may result in liability for the AI-using entity.
– U.S. ex rel. v. eviCore Healthcare MSI, LLC (2d Cir. 2022)
Can Providers Use AI to Cheat?
Does AI Infringe Copyright?
J. DOE 1 et al. v. GitHub, Inc. et al., Case
No. 4:22-cv-06823-JST (N.D. Cal. 2022):
• Ps: They and class own copyrighted materials made available
publicly on GitHub.
• Ps: Representing class, assert 12 causes of action, including
violations of Digital Millennium Copyright Act, California
Consumer Privacy Act, and breach of contract.
Claim:
• Defendants OpenAI's Codex and GitHub's Copilot generate
suggestions nearly identical to code scraped from public
GitHub repositories, without giving the attribution required
under the applicable license.
Defenses:
1. Standing. Did these Plaintiffs suffer injury?
2. Intent: Copilot, as a neutral technology, cannot satisfy
DMCA’s § 1202's intent and knowledge requirements.
https://telehealth.org/ai-copyright-
chatgpt-copyright/
What Other Issues Should We Consider?
• Ownership of data
• Antitrust
– Algorithmic pricing can be highly competitive.
– But competitors could use the same software to collude.
Does AI Engage in Invidious
Discrimination?
Training data are key:
• A facial recognition AI was unable to accurately identify more than one-third of Black women's faces in a photo lineup (a per-group error audit is sketched below).
– The algorithm was trained on a majority-male, majority-white dataset.
Does AI Engage in Invidious
Discrimination?
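One practical response to the training-data problem above is to audit a model's error rates by subgroup before relying on it. A minimal sketch with fictional records and field names follows.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the error rate for each demographic group, where each record has
    'group', 'label', and 'prediction' keys. Large gaps between groups suggest
    biased training data or features."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        errors[r["group"]] += int(r["prediction"] != r["label"])
    return {g: errors[g] / totals[g] for g in totals}

# Fictional example: a model that misses far more cases in one group.
records = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 1},
]
print(error_rates_by_group(records))  # -> {'A': 0.0, 'B': 0.5}
```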
Optum:
• Algorithm to identify high-risk patients to inform fund allocation. Used health care costs to make predictions.
– Only 17.7% of black patients were identified as high-risk; the true number should have been ~46.5%.
– Spending for black patients lower than for white patients owing to "unequal access to care".
Does AI Engage in Invidious
Discrimination?
• Julia Angwin et al., “Machine Bias,” ProPublica (May 23, 2016),
https://www.propublica.org/article/machine-bias-risk-assessments-
in-criminal-sentencing
• Emily Berman, “A Government of Laws and Not of Machines,” 98
B.U.L. Rev. 1278, 1315, 1316 (2018)
• Karni Chagal-Feferkorn, “The Reasonable Algorithm,” U. Ill. J.L.
Tech. & Pol'y (forthcoming 2018)
• Duke Margolis Center for Health Policy, “Current State and Near-
Term Priorities for AI-Enabled Diagnostic Support Software in
Health Care” (2019)
References
• Cade Metz and Craig S. Smith, “Warnings of a Dark Side to A.I. in
Health Care,” NY Times (3/21/19)
• Daniel Schiff and Jason Borenstein, "How Should Clinicians
Communicate With Patients About Roles of Artificially Intelligent
Team Members?” 21(2) AMA Journal of Ethics E138-145 (Feb.
2019)
• Nicolas P. Terry, “Appification, AI, and Healthcare's New Iron
Triangle,” [Automation, Value, and Empathy] 20 J. Health Care L. &
Pol'y 118 (2018)
• Wendell Wallach, A Dangerous Master 239-43 (2015)
• Andrew Tutt, "An FDA for Algorithms," 69 Admin. L. Rev. 83, 104 (2018)
References
Final
questions?
Telehealth.org
contact@telehealth.org
619-255-2788
Keep in touch!
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
 
ENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choom
 
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptxLEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
 
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 

Here are a few key points regarding licensing and using AI in healthcare:
• Using AI to directly diagnose or treat patients would likely constitute the unlicensed practice of medicine or psychology and be illegal without proper oversight and credentials.
• AI can be used as a decision-support tool to help inform diagnoses and treatment plans, but a licensed professional still needs to make the final determination and remain directly responsible for the patient's care.
• Some AI applications, such as digital therapeutics, may require their own FDA clearance or approval depending on their intended use and risks.
• Chatbots and other AI without direct access to protected health information do not necessarily require specific licensing, but ethical and legal standards still apply regarding issues such as privacy and informed consent.

  • 1. Therapist AI & ChatGPT: How to Use Legally & Ethically
  • 2. Joseph P. McMenamin, MD, JD, FCLM Joe McMenamin is a partner at Christian & Barton in Richmond, Virginia. His practice concentrates on digital health and on the application of AI in healthcare. He is an Associate Professor of Legal Medicine at Virginia Commonwealth University and Board-certified in Legal Medicine.
  • 3. Marlene M. Maheu, PhD Marlene Maheu, PhD has been a pioneer in telemental health for three decades. With five textbooks, dozens of book chapters, and journal articles to her name, she is the Founder and CEO of the Telebehavioral Health Institute (TBHI). She is the CEO of the Coalition for Technology in Behavioral Science (CTiBS), and the Founder of the Journal for Technology in Behavioral Science.
  • 4. © 1994-2023 Telehealth.org, LLC All rights reserved. 4 And you? Please introduce yourself with city and specialty 
  • 5. © 1994-2023 Telehealth.org, LLC All rights reserved. • Participants will be able to outline an array of legal and ethical issues implicated by the use of therapist AI and ChatGPT. • Name the primary reason ChatGPT is not likely to replace psychotherapists in our lifetimes. • Outline how to best minimize therapist AI and ChatGPT ethical risks today. Learning Objectives 5
  • 6. Preventing Interruptions Maximize your learning by: • Making a to-do list as we go. • Turning on your camera & join the conversation throughout this activity. • Muting your phone. • Asking family and friends to stay away. We will not be discussing all slides.
  • 7. • Mr. McMenamin speaks neither for any legal client nor for Telehealth.org • Is neither a technical expert nor an Intellectual Property lawyer. • Offers information about the law, not legal advice. • Labors under a dearth of legal authorities specific to AI. Speaker Disclaimers
  • 8. • Must treat some subjects in cursory fashion only. • Presents theories of liability as illustrations, conceding nothing as to their validity. • Criticizes no person or entity, nor AI. • In this presentation, neither creates nor seeks to create an attorney-client relationship with any member of the audience. Speaker Disclaimers
  • 10. © 1994-2023 Telehealth.org, LLC All rights reserved. If you have begun or are considering using AI or ChatGPT in your work, please outline those activities in the chat box. 10
  • 11. We will proceed with the presentation while you do so, then we will come back later.
  • 12. What are AI and ChatGPT? ?
  • 13. © 1994-2023 Telehealth.org, LLC All rights reserved. Three Primary Areas: 1. Information Retrieval and Research 2. Personalized Case Analysis, Diagnosis & Treatment Plans 3. Client & Patient Education How are AI & ChatGPT being used to help healthcare practices? 13
  • 14. © 1994-2023 Telehealth.org, LLC All rights reserved. • Programs like Elicit and Claude can provide advanced research capabilities that exceed traditional methods. • For example, AI at Elicit can extract information from up to 100 papers and present the information in a structured table. • It can find scientific papers on a question or topic and organize the data collected into a table. • It can also discover concepts across papers to develop a table of concepts synthesized from the findings. 1. Information Retrieval and Research 14
  • 15. © 1994-2023 Telehealth.org, LLC All rights reserved. • Ethical Considerations: Ethical research practices must still apply, ensuring the information retrieved is evidence-based, peer-reviewed, and handled in compliance with privacy regulations such as HIPAA. • Issues of ChatGPT copyright ownership must also be considered; just because a system can do something does not mean we should. 1. Information Retrieval and Research 15
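To make the structured-table workflow described on slide 14 concrete, here is a minimal, hypothetical sketch of pulling a few fields out of paper abstracts and writing them to a reviewable table with a generic large-language-model call. The `extract_study_fields` helper, the field list, and the injected `ask_llm` callable are illustrative assumptions, not a description of how Elicit or Claude actually work.

```python
# Hypothetical sketch: turning paper abstracts into a structured evidence table.
# The LLM call is a placeholder; substitute whatever vendor-approved,
# appropriately contracted client your practice actually uses.
import csv
import json
from typing import Callable

FIELDS = ["population", "intervention", "outcome", "sample_size"]

def extract_study_fields(abstract: str, ask_llm: Callable[[str], str]) -> dict:
    """Ask a language model to return the requested fields as JSON."""
    prompt = (
        "Read the abstract below and return a JSON object with the keys "
        f"{FIELDS}. Use null when a field is not reported.\n\n" + abstract
    )
    raw = ask_llm(prompt)        # e.g., a call to your LLM vendor's API
    return json.loads(raw)       # production code should validate this output

def build_evidence_table(abstracts: list[str], ask_llm, path: str = "evidence.csv"):
    """Extract fields from each abstract and save the rows as a CSV table."""
    rows = [extract_study_fields(a, ask_llm) for a in abstracts]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

The point of the sketch is the workflow, papers in and a reviewable table out, not any particular vendor; per slide 15, the clinician still has to verify that what lands in the table is evidence-based and correctly attributed.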
  • 16. © 1994-2023 Telehealth.org, LLC All rights reserved. • Programs like OpenAI, Bard, Monica, and others can analyze and detect behavioral health issues and potential diagnoses from "prompts," that is, commands, that can range from short behavioral descriptions to vast patient datasets. • They can query for signs of substance use, self-harm, depression, suicidality, etc. • They can also engage in brainstorming sessions to explore various possible diagnoses, which facts to collect, or areas to explore to arrive at a definitive diagnosis. • They can incorporate extensive patient data, including medical history, psychological assessments, and patient demographics. 2. Personalized Case Analysis, Diagnosis & Treatment Plans 16
  • 17. © 1994-2023 Telehealth.org, LLC All rights reserved. • They use natural language processing (NLP) to extract relevant information from clinical notes, interviews, and questionnaires. • They can be instructed to incorporate structured data such as diagnostic codes (ICD-10), medication history, and desired treatment outcomes. • These chatbots can be given established clinical guidelines or consensus documents to ask how one's treatment plan needs to be adjusted to comply with the guidelines. • They can also engage in brainstorming sessions to explore various possible diagnoses, which facts to collect, or areas to explore to arrive at a definitive diagnosis. 2. Personalized Case Analysis, Diagnosis & Treatment Plans 17
  • 18. © 1994-2023 Telehealth.org, LLC All rights reserved. • Ethical Considerations: All protected health information (PHI) must be meticulously removed before uploading any prompts. • Plus, full transparency must be given to clients and patients regarding AI's role in their diagnosis. • Attention to the strong biases inherent in AI is needed to ensure that AI doesn't perpetuate existing inequalities. • HIPAA privacy and copyright laws must also be observed. These requirements take time and attention. • Practitioners are strongly advised only to undertake these activities after due training. 2. Personalized Case Analysis, Diagnosis & Treatment Plans 18
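As a purely illustrative sketch of the PHI point on slide 18, the snippet below shows one way a clinician's workflow might strip obvious identifiers from a case summary before it is ever placed in a prompt. The regex patterns and the `redact` and `build_case_prompt` helpers are assumptions made up for this example; real de-identification requires a validated tool and a BAA-covered pipeline, not a handful of regular expressions.

```python
# Hypothetical sketch only: crude redaction before prompt construction.
# Real PHI de-identification needs a validated tool and legal review.
import re

PATTERNS = {
    "NAME":  re.compile(r"\b(Mr\.|Ms\.|Mrs\.|Dr\.)\s+[A-Z][a-z]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN":   re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def build_case_prompt(case_summary: str, icd10: str, guideline: str) -> str:
    """Assemble a de-identified prompt pairing the case with a named guideline."""
    return (
        "You are assisting a licensed clinician. Given the de-identified case "
        f"below (working diagnosis {icd10}), list questions to explore and note "
        f"where the plan may diverge from this guideline: {guideline}\n\n"
        + redact(case_summary)
    )
```

Even with a guard like this, the clinician remains responsible for the final determination, for telling the client how AI was used, and for checking the output against the cited guideline.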
  • 19. © 1994-2023 Telehealth.org, LLC All rights reserved. • These chatbots can develop tailored treatment plans to meet individual patient needs after considering diagnoses, client or patient preferences, comorbidities, and responses to previous treatments. • Ethical Considerations: Legal and ethical standards for patient privacy, autonomy, and informed consent must be upheld. • Free ChatGPT systems often publicly announce in their Terms and Conditions files that they own all information entered into their systems. 3. Personalized Treatment Plans 19
  • 21. © 1994-2023 Telehealth.org, LLC All rights reserved. • Depression-clients’ voices. • OUD-Narx scores and overdose risk rating. • Digital Therapeutics: CBT for OUD (Pear)  Bankrupt • Akili Interactive Labs: Interactive digital games (like videogames).  ADHD, Major depression, ASD, MS. Other Uses of ChatGPT by Professionals
  • 22. Is Facebook’s Suicide Prevention Service “Research”?
  • 23. © 1994-2023 Telehealth.org, LLC All rights reserved. Facebook Innovation • Technique is innovative, novel. • Facebook taught its algorithm text to ignore. • Proprietary: Details not available. • Informed Consent? (see below)
  • 24. © 1994-2023 Telehealth.org, LLC All rights reserved. Facebook Accuracy • Traditional View: Prediction requires analysis of hundreds of factors: race, sex, age, SES, medical history, etc. • Record of results? Publication? • Efficacy across races, sexes, nationalities? • False Positive: Unwanted psych care? • Users: Wariness enhanced? • Barnett and Torous, Ann. Int. Med. (2/12/19)
  • 25. What is AI’s Clinical Reliability?
  • 26. How AI has helped: 1. Personal Sensing (“Digital Phenotyping”) Collecting and analyzing data from sensors (smartphones, wearables, etc.) to identify behaviors, thoughts, feelings, and traits. 2. Natural language processing 3. Chatbots D’Alfonso, Curr Opin Psychol. 2020;36:112–117.
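Slide 26's "personal sensing" idea reduces, in its simplest form, to turning raw device data into behavioral features a clinician can track over time. The sketch below computes one such feature, late-night phone-use minutes as a rough sleep-disruption proxy, from hypothetical timestamped screen-on events; the event format, the threshold hours, and the feature itself are assumptions for illustration, not any published digital-phenotyping protocol.

```python
# Hypothetical sketch: one "digital phenotyping" feature from screen-on events.
from datetime import datetime

def night_use_minutes(events: list[tuple[str, int]],
                      start_hr: int = 0, end_hr: int = 5) -> float:
    """Sum screen-on minutes whose start time falls between start_hr and end_hr.

    `events` is a list of (ISO-8601 timestamp, duration_seconds) pairs.
    """
    total_seconds = 0
    for timestamp, seconds in events:
        hour = datetime.fromisoformat(timestamp).hour
        if start_hr <= hour < end_hr:
            total_seconds += seconds
    return total_seconds / 60

# Example: two late-night unlocks and one afternoon unlock on the same day.
sample = [("2023-05-01T01:14:00", 600),
          ("2023-05-01T03:02:00", 180),
          ("2023-05-01T14:30:00", 1200)]
print(night_use_minutes(sample))  # -> 13.0 minutes between midnight and 5 a.m.
```

A single feature like this is not diagnostic on its own; it only becomes clinically meaningful when a licensed professional interprets it alongside the rest of the record.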
  • 27. 1. Machine Learning • Predict and classify suicidal thoughts, depression, schizophrenia with “high accuracy”. U. Cal and IBM, https://www.forbes.com/sites/bernardmarr/2023/07/06/ai-in-mental-health-opportunities-and-challenges-in-developing-intelligent-digital-therapies/ 2. Causation v. Correlation • Better prognosis for pneumonia in asthma patients.
  • 28. 6. Hallucinations • NEDA’s Tessa: Harmful diet advice to patients with eating disorders. 7. Generalizability • When training data do not resemble actual data. • Watson and chemo. 8. No compassion or empathy 9. No conceptual thinking 10. No common sense
  • 29. Does AI Threaten Privacy?
  • 30. © 1994-2023 Telehealth.org, LLC All rights reserved. Big Data • Amazon’s Alexa and the NHS: No ? sharing of patient data. • Duration of retention of information?
  • 31. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Facebook, again: No opt-in or opt-out. • Targeted ads? • HIPAA: N/A. No covered entity, no business associate.  Is de-identification obsolete? • COPPA: N/A: Child committing suicide was not less than 13 years old.
  • 32. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. Privacy laws expanding, yet not clear that existing laws suffice. Consider California: 1. HIPAA as amended by HITECH 2. Cal. Confidentiality of Medical Information Act 3. Cal. Online Privacy Protection Act 4. Cal. Consumer Privacy Act 5. California’s Bot Disclosure Law 6. GDPR • Yet still not certain the law covers info on apps. Facial recognition: both privacy and discrimination laws. 32
  • 33. Has AI Generated Any Privacy Litigation?
  • 34. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. PM v OpenAI (N.D. Cal. 2023) • Purported class action alleges OpenAI violated users’ privacy rights based on data scraping of social media comments, chat logs, cookies, contact info, log-in credentials and financial info.
  • 35. Do We Need to License AI to Use it in Healthcare?
  • 36. Do We Need to License AI to Use it in Healthcare? • Practice of clinical psychology includes but is not limited to: ‘Diagnosis and treatment of mental and emotional disorders’ which consists of the appropriate diagnosis of mental disorders according to standards of the profession and the ordering or providing of treatments according to need. • Va. Code § 54.1-3600 • Other professions have similar statutes across the 50 states & territories.
  • 37. Do We Need to License AI to Use it in Healthcare? • Definitions of medicine, psychology, nursing, etc.: • Likely broad enough to encompass AI functions. • An AI system is not human, but if it functions as a HC professional, some propose licensure or some other regulatory mechanism.
  • 38. Do We Need to License AI to Use it in Healthcare? If licensure is needed: • If so, in what jurisdiction(s)? • Consider scope of practice.
  • 39. What Does FDA Say About AI in Healthcare?
  • 40. What Does FDA Say About AI in Healthcare? • Regulatory framework is not yet fully developed. • Historical: Drug or device maker wishing to modify product submits proposal, and supporting data; FDA says yes or no. • FDA recognizes potential for drug development and the impediments that fusty regulation could erect.
  • 41. What Does the Food and Drug Administration (FDA) Say About AI in Healthcare? • Concerned with transparency (can it be explained? what of intellectual property?) and with the security and integrity of data generated; potential for amplifying errors or biases. • FDA urges creation of a risk management plan, and care in the choice of training data, testing, and validation. • Pre-determined change control plans.
  • 43. What Types of Clinical Decision Software (“CDS”) Will FDA Regulate Most Closely?
  • 44. © 1994-2023 Telehealth.org, LLC All rights reserved. 44 FDA Concerns 1. CDS to “inform clinical management for serious or critical situations or conditions” especially where the health care provider cannot independently evaluate basis for recommendation. 2. CDS functions intended for patients to inform clinical management of non- serious conditions or situations, and not intended to help patients evaluate basis for recommendations. 3. Software that uses patient’s images to create treatment plans for health care provider review for patients undergoing RT with external beam or brachytherapy.
  • 45. May I Use AI in Hiring?
  • 47. What Have the States to Say About AI in Employment Decisions?
  • 48. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Most States: Silent so far. • Ill., Md., and NYC: Employers need candidate’s consent to use AI in hiring. • NYC: Must prove to a third-party audit company that Employer’s process was free of sexual or racial biases.
  • 49. Can AI Be Liable in Tort?
  • 50. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Not human, and not a legal person.  Cannot be directly liable for its own negligence or serve as an agent for vicarious liability. • Many different SW and HW developers take part.
  • 51. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Control hard to determine, given: • Discreteness: Parts made at different times in different places without coordination. • Diffuseness: Developers may not act in conjunction. Yet: Consider corporations and ships (an “in rem” action in admiralty law)
  • 52. Does AI Owe a Duty to Clients?
  • 53. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • For the court. • In health care, duty arises from professional relationship.  Can AI have such a relationship?  Consulting physician who does not interact with the patient owes no duty to that patient. See Irvin v. Smith, 31 P.3d 934, 941 (Kan. 2001); St. John v. Pope, 901 S.W.2d 420, 424 (Tex. 1995)
  • 54. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Does AI resemble a consultant? • Or an MRI, e.g.?  Epic sepsis model missed 2/3 of cases. JAMA IM 6/21 • Beware Automation Bias
  • 56. Can Plaintiffs Impose a Standard of Care on AI?
  • 57. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. 57 • HCP: Reasonableness  Can AI ever be unreasonable?  Is the HCP relying on AI immune from liability?  Higher SOC for HCP using AI?  Will AI endanger state standards of care? • Will res ipsa play a role?  Probably not if the harm is unexplainable, untraceable, and rare. • Nor can P establish exclusive control  But what about the auto pilot cases?
  • 58. Are AI Errors Foreseeable?
  • 59. © 1994-2023 Telehealth.org, LLC All rights reserved. 59 • Foreseeability: A precondition of a finding of negligence.  Law expects actor to take reasonable steps to reduce the risk of foreseeable harms. • Software developer cannot predict how unsupervised AI will solve the tasks and problems it encounters.  Machine teaches itself how to solve problems in unpredictable ways.  No one knows exactly what factors go into AI system’s decisions • The unforeseeability of AI decisions is itself foreseeable. Are AI Errors Foreseeable?
  • 60. © 1994-2023 Telehealth.org, LLC All rights reserved. 60 • Computational models to generate recommendations are opaque.  Algorithms may be non-transparent because they rely on rules we humans cannot understand.  No one, not even programmers, knows what factors go into ML. • AI's solution may not have been foreseeable to a human. Even the human who designed the AI.  Does that defeat a claim of duty? Are AI Errors Foreseeable?
  • 61. © 1994-2023 Telehealth.org, LLC All rights reserved. 61 • In a black-box AI system, the result of an AI’s decision may not have been foreseeable to its creator or user.  So, will an AI system be immune from liability?  Will its creator? Are AI Errors Foreseeable?
  • 62. What if AI Recommends Non-standard Treatment?
  • 63. What if AI Recommends Non-standard Treatment? • The progress problem: Arterial blood gas monitoring in premature newborns circa 1990. • Non-standard advice: Proceed with caution.  The tension between progress and tort law.
  • 64. Can I be Liable for My AI’s Mistake?
  • 65. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Can AI be my agent?  No ability to negotiate the scope of authorization.  Cannot dissolve agent-principal relationship.  Cannot renegotiate its terms.  An agent can refuse agency; A principal can refuse to be the master. • Agency law does contemplate that the agent will use her discretion in carrying out the principal’s tasks.
  • 66. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Who controls the AI, if anyone?  AI autonomy is increasing. • If machine is autonomous, could it not embark on a frolic and detour beyond the scope of its employment?
  • 67. If AI Can be an Agent, What or Who is its Principal?
  • 68. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Note the decline of the “Captain of the Ship” doctrine. • Possibilities: • Component designer? • Medical device company? • The owner of the AI’s algorithm? • Whoever maintains the product? • Health care professionals?
  • 69. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. Possibilities (cont’d): • Hospitals and health care systems? • Pharmaceutical companies? • Professional schools? • Insurers? • Regulators?
  • 70. Could I be Liable for Promoting AI?
  • 71. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. 71 • Hospitals: Large investments in robotic systems, e.g.  Procedures more expensive.  By shifting resident teaching time from standard laparoscopy to robotic surgery, we may produce “high-cost” surgeons whom insurers will penalize. • Damage to the professional relationship?  The rapport problem.
  • 72. Does the Law Require the Patient’s Informed Consent to Use of AI in Health Care?
  • 73. Does the Law Require the Patient’s Informed Consent to Use of AI in Health Care? • Traditional:  “Every human being of adult years and sound mind has a right to determine what shall be done with his own body” Schloendorff v. NY Hospital, 105 N.E. 92 (N.Y. 1914) (Cardozo, J.) • AI: What disclosures are required?
  • 74. (cont’d) • Explain how AI works?  What does ‘informed’ mean where no-one knows how black-box AI works? • Whether the AI system was trained on a data set representative of a particular patient population? • Comparative predictive accuracy and error rates of AI system across patient subgroups? • Roles human caregivers and the AI system will play during each part of a procedure?
  • 75. (cont’d) • Whether a medtech or pharma company influenced an algorithm? • Compare results with AI and human approaches?  What if there are no data? • What if the patient doesn’t want to know? • Provider’s financial interest in the AI used? • Disclose AI recommendations HCP disapproves, or COIs?
  • 76. (cont’d) • Pedicle screw litigation: Used off-label  At present, nearly all AI is used off-label. • Investigative nature of the device's use?  Rights of subjects in clinical trials? • Experimental procedures: “most frequent risks and hazards” will remain unknown until the procedure becomes established.
  • 77. Will Plaintiffs be Able to Prevail on Product Liability Claims?
  • 78. © 1994-2023 Telehealth.org, LLC All rights reserved. 78 • A creature of state law.  Theories of liability sound in negligence, strict liability, or breach of warranty. • Responsibility of a manufacturer, distributor, or seller of a defective product.  Is AI a “product” or a service?  The law has traditionally held that only personal property in tangible form can be considered “products.” The law has traditionally considered software to be a service. Will Plaintiffs be Able to Prevail on Product Liability Claims?
  • 79. © 1994-2023 Telehealth.org, LLC All rights reserved. 79 • Claimant must prove the item that caused the injury was defective at the time it left the seller’s hands.  By definition, ML changes the product over time. • Suppose an AI system is used to detect abnormalities on MRIs automatically and is advertised as a way to improve productivity in analyzing images,  No problem interpreting high-resolution images but  Fails with images of lesser quality. Likely: A products liability claim for both negligence and failure to warn. Will Plaintiffs be Able to Prevail on Product Liability Claims?
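One practical mitigation for the low-quality-image scenario on slide 79 is a pre-inference check that warns rather than silently returning an answer when the input falls outside what the model was validated on. The quality metric, resolution threshold, and `model.predict` interface below are all hypothetical; the point is the warn-rather-than-guess pattern, not any vendor's actual API.

```python
# Hypothetical sketch: gate model output on an input-quality check.
import numpy as np

MIN_RESOLUTION = (256, 256)   # assumed validation floor, for illustration only

class LowQualityImageError(Exception):
    """Raised instead of returning a prediction the model was not validated for."""

def detect_abnormality(image: np.ndarray, model) -> dict:
    """Run the (hypothetical) detector only on images within its validated range."""
    if image.shape[0] < MIN_RESOLUTION[0] or image.shape[1] < MIN_RESOLUTION[1]:
        # Surface the limitation to the clinician rather than guessing.
        raise LowQualityImageError(
            f"Image {image.shape[:2]} is below the validated resolution "
            f"{MIN_RESOLUTION}; interpret manually or re-acquire."
        )
    score = float(model.predict(image))   # hypothetical model returning a probability
    return {"abnormality_score": score, "validated_input": True}
```

Whether a guard like this would defeat a failure-to-warn claim is a legal question, but documenting the model's validated operating range is exactly the kind of disclosure the slide is pointing at.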
  • 80. © 1994-2023 Telehealth.org, LLC All rights reserved. 80 • No matter how good the algorithm is, or how much better it is than a human, it will occasionally be wrong.  Exception to strict liability for unavoidably unsafe products. (Restatement) • Imposing strict liability: Would likely slow down or cease production of this technology. Will Plaintiffs be Able to Prevail on Product Liability Claims?
  • 81. Is There a Duty to Warn?
  • 82. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. Duty to warn: Traditional • Products: 1. Manufacturer knew or should have known that the product poses substantial risk to the user. 2. Danger would not be obvious to users. 3. Risk of harm justifies the cost of providing a warning. • Mental Health:  Tarasoff v. The Regents of the University of California (1976)
  • 83. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Learned Intermediary (LI) Rule: 1. Likelihood harm will occur if the intermediary does not pass on the warning to the ultimate user. 2. Magnitude of the probable harm. 3. Probability that the particular intermediary will not pass on the warning. 4. Ease or burden of the giving of the warning by the manufacturer to the ultimate user.
  • 84. Will Plaintiffs be Able to Prove Causation?
  • 85. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Causation will often be tough in AI tort cases. • Demonstrating the cause of an injury: Already hard in health care.  Outcomes frequently probabilistic rather than deterministic. • AI models: Often nonintuitive, even inscrutable.  Causation even more challenging to demonstrate.
  • 86. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • No design or manufacturing flaw if robot involved in an accident was properly designed, but based on the structure of the computing architecture, or the learning taking place in deep neural networks, an unexpected error or reasoning flaw could have occurred.  Mracek v Bryn Mawr Hospital, 610 F. Supp. 2d 401 (E.D. Pa. 2009), aff ‘d, 363 Fed. Appx. 925, 927 (3d Cir. 2010)
  • 87. Who is an Expert?
  • 88. Who is an Expert? • Trial Court: Cardiologist not qualified to testify on weight loss drug combo that proprietary software package recommended because doctor is not a software expert.  Skounakis v. Sotillo A-2403-15T2 (N.J. Super. Ct. App. Div. Mar. 19, 2018) (on appeal, reversed)
  • 89. Who is an Expert? • MD who had performed many robotic surgeries not qualified on causation for want of programming expertise.  Mracek v. Bryn Mawr Hospital, 363 F. App'x. 925, 926 (3d Cir. 2010) (ED complicating robotic prostatectomy)
  • 90. Marketing: Should We Expect Breach of Warranty Claims?
  • 91. © 1994-2023 Telehealth.org, LLC All rights reserved. 91 • A warranty may arise by an affirmation of fact or a promise made by seller relating to the product. See U.C.C. § 2-313.  Need not use special phrases or formal terms (“guarantee”; “warranty”) • Promotion of an AI system as a superior product may create a cause of action for breach of warranty.  Darringer v. Intuitive Surgical, Inc., No. 5:15-cv-00300-RMW, 2015 U.S. Dist. LEXIS 101230, at *1, *3 (N.D. Cal. Aug. 3, 2015). (another DaVinci robot case) Marketing: Should We Expect Breach of Warranty Claims?
  • 92. Is AI a Person?
  • 93. © 1994-2023 Telehealth.org, LLC All rights reserved. Is AI a Person? Of course not. • Artificial agents lack self-consciousness, human-like intentions, ability to suffer, rationality, autonomy, understanding, and social relations deemed necessary for moral personhood. But:
  • 94. © 1994-2023 Telehealth.org, LLC All rights reserved. Is AI a Person? But: • Could serve useful cost-spreading and accountability functions. • EU Parliament, 2017: Recognizing autonomous robots as “having the status of electronic persons responsible for making good any damage they may cause”.  Compulsory insurance scheme
  • 95. © 1994-2023 Telehealth.org, LLC All rights reserved. • Opponents • Harm caused by even fully autonomous technologies is generally reducible to risks attributable to natural persons or existing categories of legal persons. • Even limited AI personhood (corps, e.g.) will require robust safeguards such as having funds or assets assigned to the AI person.
  • 96. Will Plaintiffs be Able to Impose Common Enterprise Liability with AI?
  • 97. Example: Hall v. Du Pont, 345 F.Supp. 353 (E.D.N.Y. 1972)
  • 98. © 1994-2023 Telehealth.org, LLC All rights reserved. 98 • 1955-’59: Blasting caps injured 13 kids, 12 incidents, 10 states. • Claim: Failure to warn. • Ds: 6 cap mfrs + TA. • Evidence: Acting independently, Ds adhered to industry-wide safety standard; delegated labeling to TA; industry-wide cooperation in the manufacture and design of blasting caps. • Held: If Ps could show ≥ 1 D mfr made the caps, burden of proof on causation would shift to Ds. Example: Hall v. Du Pont, 345 F.Supp. 353 (E.D.N.Y. 1972)
  • 99. © 1994-2023 Telehealth.org, LLC All rights reserved. 99 • Theory: Clinicians, manufacturers of clinical AI systems, and hospitals that employ the systems are engaged in a common enterprise for tort liability purposes.  As members of common enterprise, could be held jointly liable.  Used where Ds strategically formed and used corporate entities to violate consumer protection law. E.g., Fed. Trade Comm'n v. Pointbreak Media, LLC, 376 F. Supp. 3d 1257, 1287 (S.D. Fla. 2019) (corporations were considered to be functioning jointly as a common enterprise) (cont’d)
  • 100. How Can We Defend Ourselves Against Claims?
  • 101. © 1994-2023 Telehealth.org, LLC All rights reserved. 101 • Compliance with FDA regulations: Preemption. • Policy: No product liability claim encompasses the unpredictable, autonomous machine-mimicking-human behavior underlying AI’s medical decision-making.  Unpredictability of autonomous AI is not a bug, but a feature. How Can We Defend Ourselves Against Claims?
  • 102. © 1994-2023 Telehealth.org, LLC All rights reserved. 102 • Software is not a Product.  Rodgers v. Christie, 795 F. App'x 878, 878-79 (3rd Cir. 2020): Public Safety Assessment (PSA), an algorithm that was part of the state's pretrial release program, was not a product, so product liability for the murder of a man by a killer on pre-trial release did not lie. 1. Not disseminated commercially. 2. Algorithm was neither “tangible personal property” nor tenuously “analogous to” it. How Can We Defend Ourselves Against Claims?
  • 103. © 1994-2023 Telehealth.org, LLC All rights reserved. 103 • Breach of warranty: Privity  Typically the clinician, and not the patient, purchased the system. • Product misuse, modification: Progress notes, e.g.  Seller does not know the specifics of these additional records or how the algorithm developed following the provider’s use. • LI doctrine How Can We Defend Ourselves Against Claims?
  • 104. Will AI Put Me Out of Work?
  • 105. Will AI Put Me Out of Work? • ChatGPT can outperform 1st and 2nd year medical students in answering challenging clinical care exam questions. • Law students: Similar. • But: Probably not.
  • 106. (cont’d) • John Halamka: “Generative AI is not thought, it's not sentience.” • Most, if not all, countries are experiencing severe clinician shortages.  Shortages are only predicted to get worse in the U.S. until at least 2030.
  • 107. (cont’d) • AI-infused precision health tools might well be essential to improving the efficiency of care. • AI might help burn-out: ease the day-to-day weariness, lethargy, and delay of reviewing patient charts. • The day may come when the SOC requires use of AI.
  • 108. Can we Get Paid for Using AI?
  • 109. © 1994-2023 Telehealth.org, LLC All rights reserved. 109 • Consider a pathology over-read for an in-patient: • Whether hospital is in- or out-of-network for patient's insurance • Whether patient's insurer deems AI to be “medically necessary” • If in-network, what is the negotiated fee for this specific intervention between this hospital and this patient's insurer • Whether deal pays for hospitalization per diem or on Diagnosis Related Group (DRG) basis • AI might add nothing to charge • What percentage of co-insurance the patient must pay • How much of the deductible the patient will have met by end of this episode of care. Can We Get Paid for Using AI?
  • 110. © 1994-2023 Telehealth.org, LLC All rights reserved. 110 Consider an outpatient setting: • Whether the outpatient facility is in or out-of-network for the patient's insurer. • Whether the facility is owned by a hospital.  If hospital-owned, may add a “facilities fee”. • Whether this patient's insurer deems the AI to be “medically necessary”. • Negotiated fee schedule between facility and the patient's insurer. • How much of the deductible the patient will have met by the conclusion of this episode of care. Can We Get Paid for Using AI?
  • 111. © 1994-2023 Telehealth.org, LLC All rights reserved. 111 • Provided for "medically necessary" care. • Not: experimental treatments or devices • Slow governmental adoption: The telehealth model. Can We Get Paid for Using AI?
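To show how the cost-sharing factors on slides 109 through 111 interact, here is a toy calculation of what a patient might owe for a single hypothetical AI-assisted service. Every number and the `patient_responsibility` helper are invented for illustration and say nothing about actual negotiated rates, CMS policy, or any payer's rules.

```python
# Toy illustration only: how negotiated fee, deductible, and co-insurance combine.
def patient_responsibility(negotiated_fee: float,
                           deductible_remaining: float,
                           coinsurance_rate: float) -> float:
    """Return the patient's share of one billed service under simple cost-sharing."""
    applied_to_deductible = min(negotiated_fee, deductible_remaining)
    remainder = negotiated_fee - applied_to_deductible
    return applied_to_deductible + remainder * coinsurance_rate

# Hypothetical in-network service with an AI component folded into one negotiated fee.
fee = 400.00  # invented negotiated fee
print(patient_responsibility(fee, deductible_remaining=150.00, coinsurance_rate=0.20))
# -> 150 toward the deductible + 20% of the remaining 250 = 200.0
```

If the payer instead reimburses per diem or on a DRG basis, the AI component may add nothing to the charge at all, which is the point made on slide 109.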
  • 112. © 1994-2023 Telehealth.org, LLC All rights reserved. 112 • 9/20: CMS approved the 1st add-on payment, up to $1,040 on top of inpatient hospitalization costs, for use of software by Viz.ai to help detect strokes. • Whether a 43-patient study used to support the company’s claim of clinical benefit was large enough to warrant the added reimbursement? Can We Get Paid for Using AI?
  • 113. Can AI Detect or Prevent Fraud?
  • 114. Can AI Detect or Prevent Fraud? • One large health insurer reported a savings of $1 billion annually through AI-prevented FWA. • Fed. Ct. App.: Company’s use of AI for prior auth and utilization management services to MA and Medicaid managed care plans is subject to qualitative review that may result in liability for the AI-using entity.  US ex rel. v. Evicore Healthcare MSI, LLC (2d Cir. 2022)
  • 115. Can Providers Use AI to Cheat?
  • 116. Does AI Infringe Copyright?
  • 117. J. DOE 1 et al. v. GitHub, Inc. et al., Case No. 4:22-cv-06823-JST (N.D. Cal. 2022): • Ps: They and class own copyrighted materials made available publicly on GitHub. • Ps: Representing class, assert 12 causes of action, including violations of Digital Millennium Copyright Act, California Consumer Privacy Act, and breach of contract.
  • 118. Claim: • Defendants' OpenAI's Codex and GitHub's Copilot generate suggestions nearly identical to code scraped from public GitHub repositories, without giving the attribution required under the applicable license.
  • 119. Defenses: 1. Standing. Did these Plaintiffs suffer injury? 2. Intent: Copilot, as a neutral technology, cannot satisfy DMCA’s § 1202's intent and knowledge requirements.
  • 121. What Other Issues Should We Consider?
  • 122. © 1994-2023 Telehealth.org, LLC All rights reserved. • Ownership of data • Antitrust  Algorithmic pricing can be highly competitive.  But competitors could use the same software to collude.
  • 123. Does AI Engage in Invidious Discrimination?
  • 124. © 1994-2023 Telehealth.org, LLC All rights reserved. 124 Training data key: • A facial recognition AI software was unable to accurately identify > 1/3 of BFs in a photo lineup.  Algorithm was trained on a majority male and white dataset. Does AI Engage in Invidious Discrimination?
  • 125. © 1994-2023 Telehealth.org, LLC All rights reserved. 125 Optum: • Algorithm to identify high-risk patients to inform fund allocation. Used health care costs to make predictions.  Only 17.7% of black patients were identified as high-risk; true number should have been ~ 46.5%.  Spending for black patients lower than for white patients owing to “unequal access to care”. Does AI Engage in Invidious Discrimination?
  • 126. © 1994-2023 Telehealth.org, LLC All rights reserved. 126 • Julia Angwin et al., “Machine Bias,” ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments- in-criminal-sentencing • Emily Berman, “A Government of Laws and Not of Machines,” 98 B.U.L. Rev. 1278, 1315, 1316 (2018) • Karni Chagal-Feferkorn, “The Reasonable Algorithm,” U. Ill. J.L. Tech. & Pol'y (forthcoming 2018) • Duke Margolis Center for Health Policy, “Current State and Near- Term Priorities for AI-Enabled Diagnostic Support Software in Health Care” (2019) References
  • 127. © 1994-2023 Telehealth.org, LLC All rights reserved. 127 • Cade Metz and Craig S. Smith, “Warnings of a Dark Side to A.I. in Health Care,” NY Times (3/21/19) • Daniel Schiff and Jason Borenstein, “How Should Clinicians Communicate With Patients About the Roles of Artificially Intelligent Team Members?” 21(2) AMA Journal of Ethics E138-145 (Feb. 2019) • Nicolas P. Terry, “Appification, AI, and Healthcare's New Iron Triangle,” [Automation, Value, and Empathy] 20 J. Health Care L. & Pol'y 118 (2018) • Wendell Wallach, A Dangerous Master 239-43 (2015) • Andrew Tutt, “An FDA for Algorithms,” 69 Admin. L. Rev. 83, 104 (2018) References