2. Joseph P. McMenamin, MD, JD, FCLM
Joe McMenamin is a partner at Christian & Barton in
Richmond, Virginia. His practice concentrates on digital health
and on the application of AI in healthcare.
He is an Associate Professor of Legal Medicine at Virginia
Commonwealth University and Board-certified in Legal
Medicine.
3. Marlene M. Maheu, PhD
Marlene Maheu, PhD has been a pioneer in telemental health
for three decades.
With five textbooks, dozens of book chapters, and journal
articles to her name, she is the Founder and CEO of the
Telebehavioral Health Institute (TBHI).
She is the CEO of the Coalition for Technology in Behavioral
Science (CTiBS), and the Founder of the Journal for
Technology in Behavioral Science.
5. © 1994-2023 Telehealth.org, LLC. All rights reserved.
⢠Participants will be able to outline an array of legal and
ethical issues implicated by the use of therapist AI and
ChatGPT.
⢠Name the primary reason ChatGPT is not likely to replace
psychotherapists in our lifetimes.
⢠Outline how to best minimize therapist AI and ChatGPT
ethical risks today.
Learning
Objectives
5
6. Preventing Interruptions
Maximize your learning by:
• Making a to-do list as we go.
• Turning on your camera and joining the conversation throughout this activity.
• Muting your phone.
• Asking family and friends to stay away.
We will not be discussing all slides.
7. ⢠Mr. McMenamin speaks neither for any legal client nor for Telehealth.org
⢠Is neither a technical expert nor an Intellectual Property lawyer.
⢠Offers information about the law, not legal advice.
⢠Labors under a dearth of legal authorities specific to AI.
Speaker Disclaimers
8. ⢠Must treat some subjects in cursory fashion only.
⢠Presents theories of liability as illustrations, conceding nothing as to their
validity.
⢠Criticizes no person or entity, nor AI.
⢠In this presentation, neither creates nor seeks to create an attorney-client
relationship with any member of the audience.
Speaker Disclaimers
14.
⢠Programs like Elicit and Claude can provide advanced
research capabilities that exceed traditional methods.
⢠For example, AI at Elicit can extract information from up to 100
papers and present the information in a structured table.
⢠It can find scientific papers on a question or topic and organize
the data collected into a table.
⢠It can also discover concepts across papers to develop a table
of concepts synthesized from the findings.
1. Information Retrieval and
Research
14
15.
⢠Ethical Considerations: Ethical research
practices must still apply, ensuring the
retrieved is evidence-based, peer-
to privacy regulations such as HIPAA.
⢠Issues of ChatGPT copyright ownership
considered, as just because a system
does not mean we should.
1. Information Retrieval and
Research
15
16.
⢠Programs like OpenAI, Bard, Monica, and others can analyze
and detect behavioral health issues and potential diagnoses
from "prompts, "that is, commands, that include short behaviora
descriptions to vast patient datasets.
⢠They can query for signs of substance use, self-harm,
depression, suicidality, etc.
⢠They can also engage brainstorming sessions to explore
various possible diagnoses, which facts to collect or areas to
explore to arrive at a definitive diagnosis.
⢠They can incorporate extensive patient data, including medical
history, psychological assessments, and patient demographics.
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
16
17.
⢠They use natural language processing (NLP) to extract
relevant information from clinical notes, interviews, and
questionnaires.
⢠They can be instructed to incorporate structured data such
as diagnostic codes (ICD-10), medication history, and
desired treatment outcomes.
⢠These chatbots can be given established clinical guidelines
or consensus documents to ask how one's treatment plan
needs to be adjusted to comply with the guidelines.
⢠They can also engage brainstorming sessions to explore
various possible diagnoses, which facts to collect or areas to
explore to arrive at a definitive diagnosis.
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
17
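The guideline-compliance workflow this slide describes can be illustrated with a minimal sketch. This is a hypothetical prompt-construction helper, not any vendor's API: the function name, field names, and guideline text are all illustrative assumptions, and, as the slides stress, no PHI should ever appear in such a prompt.

```python
# Hypothetical sketch: fold de-identified structured data (ICD-10 codes,
# medication history, desired outcome) and a clinical guideline excerpt
# into a single chatbot prompt for guideline-compliance review.
# All names and values here are illustrative, not a real system's schema.

def build_review_prompt(guideline: str, icd10_codes: list[str],
                        medications: list[str], desired_outcome: str) -> str:
    """Combine a clinical guideline with de-identified structured data."""
    return (
        "You are assisting a licensed clinician.\n"
        f"Guideline excerpt:\n{guideline}\n\n"
        f"Diagnoses (ICD-10): {', '.join(icd10_codes)}\n"
        f"Current medications: {', '.join(medications)}\n"
        f"Desired treatment outcome: {desired_outcome}\n\n"
        "Question: How should the treatment plan be adjusted to comply "
        "with the guideline above?"
    )

prompt = build_review_prompt(
    guideline="Reassess PHQ-9 every 4 weeks during acute-phase treatment.",
    icd10_codes=["F33.1"],            # major depressive disorder, recurrent
    medications=["sertraline 50 mg"],
    desired_outcome="PHQ-9 score below 5 within 12 weeks",
)
print(prompt)
```

Keeping the structured fields separate from the free-text question makes it easier to verify, before submission, that nothing identifying has leaked into the prompt.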
18.
⢠Ethical Considerations: All protected health
information (PHI) must be meticulously
uploading any prompts.
⢠Plus, full transparency must be given to
regarding AI's role in their diagnosis.
⢠Attention to the strong biases inherent to AI
ensure that AI doesn't perpetuate existing
inequalities.
⢠HIPAA privacy and copyright laws must also
These requirements take time and attention.
⢠Practitioners are strongly advised only to
activities after due training.
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
18
19.
⢠These chatbots can develop tailored treatment plans to meet
individual patient needs after considering diagnoses, client or
patient preferences, comorbidities, and responses to previous
treatments.
⢠Ethical Considerations: Legal and ethical standards for
standards for patient privacy, autonomy, and informed consent
must be upheld.
⢠Free ChatGPT systems often publicly announce in their Terms
and Conditions files that they own all information entered into
their systems.
3. Personalized Treatment Plans
19
24.
Accuracy
• Traditional View: Prediction requires analysis of hundreds of factors: race, sex, age, SES, medical history, etc.
• Record of results? Publication?
• Efficacy across races, sexes, nationalities?
• False Positive: Unwanted psych care?
• Users: Wariness enhanced?
◦ Barnett and Torous, Ann. Intern. Med. (2/12/19)
26. How AI has helped:
1. Personal Sensing ("Digital Phenotyping")
◦ Collecting and analyzing data from sensors (smartphones, wearables, etc.) to identify behaviors, thoughts, feelings, and traits.
2. Natural language processing
3. Chatbots
◦ D'Alfonso, Curr Opin Psychol. 2020;36:112–117.
27. How AI has helped:
4. Machine Learning
◦ Predict and classify suicidal thoughts, depression, and schizophrenia with "high accuracy."
◦ U. Cal and IBM, https://www.forbes.com/sites/bernardmarr/2023/07/06/ai-in-mental-health-opportunities-and-challenges-in-developing-intelligent-digital-therapies/
5. Causation v. Correlation
◦ Better prognosis for pneumonia in asthma patients.
28. How AI has helped:
6. Hallucinations
◦ NEDA's Tessa: Harmful diet advice to patients with eating disorders.
7. Generalizability
◦ When training data do not resemble actual data.
◦ Watson and chemo.
6. No compassion or empathy
7. No conceptual thinking
8. No common sense
32. © 1994-2022 Telebehavioral Health Institute, LLC. All rights reserved.
Privacy laws are expanding, yet it is not clear that existing laws suffice.
Consider California:
1. HIPAA as amended by HITECH
2. Cal. Confidentiality of Medical Information Act
3. Cal. Online Privacy Protection Act
4. Cal. Consumer Privacy Act
5. California's Bot Disclosure Law
6. GDPR
◦ Yet still not certain the law covers info on apps.
◦ Facial recognition: both privacy and discrimination laws.
33.
⢠âA person has no legitimate expectation of privacy in
information he voluntarily turns over to third parties.â
ď Smith v. Maryland, 442 U.S. 735 (1979)(pen register);
United States v. Miller, 425 U.S. 435 (1976)
ď Questioned: United States v. Jones, 565 U.S. 400, 417
(2012) (Sotomayor, J., concurring)
36. Do We Need to License AI to Use it in
Healthcare?
37. Do We Need to License AI to
Use it in Healthcare?
⢠Practice of clinical psychology includes but is not limited to: âDiagnosis and
treatment of mental and emotional disordersâ which consists of the appropriate
diagnosis of mental disorders according to standards of the profession and the
ordering or providing of treatments according to need.
ď Va. Code § 54.1-3600
⢠Other professions have similar statutes across the 50 states & territories.
38. Do We Need to License AI to
Use it in Healthcare?
⢠Definitions of medicine, psychology, nursing, etc.:
ď Likely broad enough to encompass AI functions.
⢠An AI system is not human, but if it functions as a HC professional, some propose
licensure or some other regulatory mechanism.
39. Do We Need to License AI to
Use it in Healthcare?
If licensure is needed:
• In what jurisdiction(s)?
• Consider scope of practice.
41. What Does FDA Say About AI in Healthcare?
⢠Regulatory framework is not yet fully developed.
⢠Historical: Drug or device maker wishing to modify product submits proposal,
and supporting data; FDA says yes or no.
⢠FDA recognizes potential for drug development and the impediments that fusty
regulation could erect.
42. What Does the Food and Drug Administration (FDA) Say About AI in Healthcare?
⢠Concerned with transparency (can it be explained? intellectual property) and
security and integrity of data generated; potential for amplifying errors or biases.
⢠FDA urges creation of a risk management plan, and care in choice of training
data, testing, validation.
⢠Pre-determined change control plans.
44. What Types of Clinical Decision Software ("CDS") Will FDA Regulate Most Closely?
45.
FDA Concerns
1. CDS to "inform clinical management for serious or critical situations or conditions," especially where the health care provider cannot independently evaluate the basis for the recommendation.
2. CDS functions intended for patients to inform clinical management of non-serious conditions or situations, and not intended to help patients evaluate the basis for recommendations.
3. Software that uses a patient's images to create treatment plans for health care provider review for patients undergoing radiation therapy (RT) with external beam or brachytherapy.
54.
⢠For the court.
⢠In health care, duty arises from professional relationship.
ď Can AI have such a relationship?
ď Consulting physician who does not interact with the
patient owes no duty to that patient.
See Irvin v. Smith, 31 P.3d 934, 941 (Kan. 2001);
St. John v. Pope, 901 S.W.2d 420, 424 (Tex. 1995)
58.
⢠HCP: Reasonableness
ď Can AI ever be unreasonable?
ď Is the HCP relying on AI immune from
liability?
ď Higher SOC for HCP using AI?
ď Will AI endanger state standards of
care?
⢠Will res ipsa play a role?
ď Probably not if the harm is
unexplainable, untraceable, and rare.
⢠Nor can P establish exclusive control
ď But what about the auto pilot cases?
60.
⢠Foreseeability: A precondition of a finding of negligence.
ď Law expects actor to take reasonable steps to reduce the risk of
foreseeable harms.
⢠Software developer cannot predict how unsupervised AI will solve
the tasks and problems it encounters.
ď Machine teaches itself how to solve problems in unpredictable
ways.
ď No one knows exactly what factors go into AI systemâs decisions
⢠The unforeseeability of AI decisions is itself foreseeable.
Are AI Errors Foreseeable?
61.
⢠Computational models to generate recommendations are opaque.
ď Algorithms may be non-transparent because they rely on rules we
humans cannot understand.
ď No one, not even programmers, knows what factors go into ML.
⢠AI's solution may not have been foreseeable to a human. Even the
human who designed the AI.
ď Does that defeat a claim of duty?
Are AI Errors Foreseeable?
62.
⢠In a black-box AI system, the result of an AIâs decision may not have
been foreseeable to its creator or user.
ď So, will an AI system be immune from liability?
ď Will its creator?
Are AI Errors Foreseeable?
64. What if AI Recommends Non-standard
Treatment?
⢠The progress problem: Arterial blood gas monitoring in premature newborns
circa 1990.
⢠Non-standard advice: Proceed with caution.
ď The tension between progress and tort law.
66.
⢠Can AI be my agent?
ď No ability to negotiate the scope of authorization.
ď Cannot dissolve agent-principal relationship.
ď Cannot renegotiate its terms.
ď An agent can refuse agency; A principal can refuse to
be the master.
⢠Agency law does contemplate that the agent will use her
discretion in carrying out the principalâs tasks.
72.
⢠Hospitals: Large investments in robotic
systems, e.g.
ď Procedures more expensive.
ď By shifting resident teaching time
from standard laparoscopy to robotic
surgery, we may produce âhigh-costâ
surgeons whom insurers will penalize.
⢠Damage to the professional
relationship?
ď The rapport problem.
73. Does the Law Require the Patient's Informed Consent to Use of AI in Health Care?
74. Does the Law Require the Patient's Informed Consent to Use of AI in Health Care?
⢠Traditional:
ď âEvery human being of adult years and sound mind has a right to
determine what shall be done with his own bodyâ
Schloendorff v. NY Hospital, 105 N.E. 92 (N.Y. 1914) (Cardozo, J.)
⢠AI: What disclosures are required?
75. (cont'd)
• Explain how AI works?
◦ What does "informed" mean where no one knows how black-box AI works?
• Whether the AI system was trained on a data set representative of a particular patient population?
• Comparative predictive accuracy and error rates of the AI system across patient subgroups?
• Roles human caregivers and the AI system will play during each part of a procedure?
76. (cont'd)
• Whether a medical technologist or pharmacist influenced an algorithm?
• Compare results with AI and human approaches?
◦ What if there are no data?
• What if the patient doesn't want to know?
• Provider's financial interest in the AI used?
• Disclose AI recommendations the HCP disapproves, or conflicts of interest (COIs)?
77. (cont'd)
• Pedicle screw litigation: Used off-label
◦ At present, nearly all AI is used off-label.
• Investigative nature of the device's use?
◦ Rights of subjects in clinical trials?
• Experimental procedures: "most frequent risks and hazards" will remain unknown until the procedure becomes established.
79.
⢠A creature of state law.
ď Theories of liability sound in negligence, strict liability, or breach of
warranty.
⢠Responsibility of a manufacturer, distributor, or seller of a defective
product.
ď Is AI a âproductâ or a service?
ď The law has traditionally held that only personal property in
tangible form can be considered âproducts.â
ď The law has traditionally considered software to be a service.
Will Plaintiffs be Able to Prevail on
Product Liability Claims?
80.
⢠Claimant must prove the item that caused the injury was defective at
the time it left the sellerâs hands.
ď By definition, ML changes the product over time.
⢠Suppose an AI system is used to detect abnormalities on MRIs
automatically and is advertised as a way to improve productivity in
analyzing images,
ď No problem interpreting high-resolution images but
ď Fails with images of lesser quality.
Likely: A products liability claim for both negligence and failure
to warn.
Will Plaintiffs be Able to Prevail on
Product Liability Claims?
81.
⢠No matter how good the algorithm is, or how much better it is than a
human, it will occasionally be wrong.
ď Exception to strict liability for unavoidably unsafe products.
(Restatement)
⢠Imposing strict liability: Would likely slow down or cease production
of this technology.
Will Plaintiffs be Able to Prevail on
Product Liability Claims?
83.
Duty to warn: Traditional
• Products:
1. Manufacturer knew or should have known that the product poses substantial risk to the user.
2. Danger would not be obvious to users.
3. Risk of harm justifies the cost of providing a warning.
• Mental Health:
◦ Tarasoff v. The Regents of the University of California (1976)
84.
⢠LI Rule:
1. Likelihood harm will occur if intermediary does no
pass on the warning to the ultimate user.
2. Magnitude of the probable harm.
3. Probability that the particular intermediary will no
pass on the warning.
4. Ease or burden of the giving of the warning by th
manufacturer to the ultimate user.
86.
⢠Causation will often be tough in AI tort cases.
⢠Demonstrating the cause of an injury: Already hard in health
care.
ď Outcomes frequently probabilistic rather than deterministic
⢠AI models: Often nonintuitive, even inscrutable.
ď Causation even more challenging to demonstrate.
87.
⢠No design or manufacturing flaw if robot involved in
an accident was properly designed, but based on the
structure of the computing architecture, or the
learning taking place in deep neural networks, an
unexpected error or reasoning flaw could have
occurred.
ď Mracek v Bryn Mawr Hospital, 610 F. Supp. 2d
401 (E.D. Pa. 2009), aff âd, 363 Fed. Appx. 925, 927
(3d Cir. 2010)
89. Who is an Expert?
• Trial Court: Cardiologist not qualified to testify on a weight loss drug combo that a proprietary software package recommended, because the doctor is not a software expert.
◦ Skounakis v. Sotillo, A-2403-15T2 (N.J. Super. Ct. App. Div. Mar. 19, 2018) (on appeal, reversed)
90. Who is an Expert?
• MD who had performed many robotic surgeries not qualified on causation for want of programming expertise.
◦ Mracek v. Bryn Mawr Hospital, 363 F. App'x 925, 926 (3d Cir. 2010) (ED complicating robotic prostatectomy)
92.
⢠A warranty may arise by an affirmation of fact or a promise made by
seller relating to the product. See U.C.C. § 2-313.
ď Need not use special phrases or formal terms (âguaranteeâ;
âwarrantyâ)
⢠Promotion of an AI system as a superior product may create a cause
of action for breach of warranty.
ď Darringer v. Intuitive Surgical, Inc., No. 5:15-cv-00300-RMW,
2015 U.S. Dist. LEXIS 101230, at *1, *3 (N.D. Cal. Aug. 3, 2015).
(another DaVinci robot case)
Marketing: Should We Expect
Breach of Warranty Claims?
94.
Is AI a Person?
Of course not.
• Artificial agents lack self-consciousness, human-like intentions, the ability to suffer, rationality, autonomy, understanding, and the social relations deemed necessary for moral personhood.
But:
95.
Is AI a Person?
But:
• Could serve useful cost-spreading and accountability functions.
• EU Parliament, 2017: Recognizing autonomous robots as "having the status of electronic persons responsible for making good any damage they may cause."
◦ Compulsory insurance scheme
96.
⢠Opponents
⢠Harm caused by even fully autonomous
technologies is generally reducible to
risks attributable to natural persons or
existing categories of legal persons.
⢠Even limited AI personhood (corps, e.g.)
will require robust safeguards such as
having funds or assets assigned to the AI
person.
99.
⢠1955-â59: Blasting caps injured 13 kids, 12 incidents, 10 states.
⢠Claim: Failure to warn.
⢠Ds: 6 cap mfrs + TA.
⢠Evidence: Acting independently, Ds adhered to industry-wide safety
standard; delegated labeling to TA; industry-wide cooperation in the
manufacture and design of blasting caps.
⢠Held: If Ps could show made ⼠1 D mfr made the caps, burden of
proof on causation would shift to Ds.
Example: Hall Du Pont, 345 F.Supp.
353 (E.D.N.Y. 1972)
100.
⢠Theory: Clinicians, manufacturers of clinical AI systems, and
hospitals that employ the systems are engaged in a common
enterprise for tort liability purposes.
ď As members of common enterprise, could be held jointly liable.
ď Used where Ds strategically formed and used corporate entities to
violate consumer protection law. E.g., Fed. Trade Comm'n v.
Pointbreak Media, LLC, 376 F. Supp. 3d 1257, 1287 (S.D. Fla. 2019)
(corporations were considered to be functioning jointly as a common
enterprise)
(contâd)
102.
⢠Compliance with FDA regulations: Preemption.
⢠Policy: No product liability claim encompasses the unpredictable,
autonomous machine-mimicking-human behavior underlying AIâs
medical decision-making.
ď Unpredictability of autonomous AI is not a bug, but a feature.
How Can We Defend Ourselves
Against Claims?
103.
⢠Software is not a Product.
ď Rodgers v. Christie, 795 F. App'x 878, 878-79 (3rd Cir. 2020):
Public Safety Assessment (PSA), an algorithm that was part of the
state's pretrial release program, was not a product, so product liability
for the murder of a man by a killer on pre-trial release did not lie.
1. Not disseminated commercially.
2. Algorithm was neither âtangible personal propertyâ nor
tenuously âanalogous toâ it.
How Can We Defend Ourselves
Against Claims?
104.
⢠Breach of warranty: Privity
ď Typically the clinician, and not the patient, purchased system.
⢠Product misuse, modification: Progress notes, e.g.
ď Seller does not know specifics of these additional records or how
algorithm developed following providerâs use.
⢠LI doctrine
How Can We Defend Ourselves
Against Claims?
106. Will AI Put Me Out of Work?
⢠ChatGPT can outperform 1st and 2nd year medical students in
answering challenging clinical care exam questions.
⢠Law students: Similar.
⢠But: Probably not.
107. (cont'd)
• John Halamka: "Generative AI is not thought; it's not sentience."
• Most, if not all, countries are experiencing severe clinician shortages.
◦ Shortages are only predicted to get worse in the U.S. until at least 2030.
108. (cont'd)
• AI-infused precision health tools might well be essential to improving the efficiency of care.
• AI might help with burnout: ease the day-to-day weariness, lethargy, and delay of reviewing patient charts.
• The day may come when the standard of care (SOC) requires use of AI.
110.
Can we Get Paid for Using AI?
Consider an outpatient setting:
• Whether the outpatient facility is in- or out-of-network for the patient's insurer.
• Whether the facility is owned by a hospital.
◦ If hospital-owned, it may add a "facilities fee."
• Whether this patient's insurer deems the AI to be "medically necessary."
• Negotiated fee schedule between the facility and the patient's insurer.
• How much of the deductible the patient will have met by the conclusion of this episode of care.
111.
⢠Provided for "medically necessary" care.
ď Not: experimental treatments or devices
⢠Slow governmental adoption: The telehealth model.
⢠9/20: CMS approved the 1st add-on payment up to $1,040, + inpatient
hospitalization costs -for use of software to help detect strokes by
Viz.ai
ď Whether a 43-patient study used to support the companyâs claim of
clinical benefit was large enough to warrant the added
reimbursement?
Can we Get Paid for Using AI?
113. Can AI Detect or Prevent Fraud?
• One large health insurer reported savings of $1 billion annually through AI-prevented fraud, waste, and abuse (FWA).
• Fed. Ct. App.: A company's use of AI for prior authorization and utilization management services to Medicare Advantage (MA) and Medicaid managed care plans is subject to qualitative review that may result in liability for the AI-using entity.
◦ US ex rel. v. eviCore Healthcare MSI, LLC (2d Cir. 2022)
116. J. DOE 1 et al. v. GitHub, Inc. et al., Case No. 4:22-cv-06823-JST (N.D. Cal. 2022):
• Ps: They and the class own copyrighted materials made publicly available on GitHub.
• Ps: Representing the class, assert 12 causes of action, including violations of the Digital Millennium Copyright Act, the California Consumer Privacy Act, and breach of contract.
117. Claim:
• Defendants' products, OpenAI's Codex and GitHub's Copilot, generate suggestions nearly identical to code scraped from public GitHub repositories, without giving the attribution required under the applicable license.
118. Defenses:
1. Standing: Did these Plaintiffs suffer injury?
2. Intent: Copilot, as a neutral technology, cannot satisfy DMCA § 1202's intent and knowledge requirements.
124.
Does AI Engage in Invidious Discrimination?
Optum:
• Algorithm to identify high-risk patients to inform fund allocation. Used health care costs to make predictions.
◦ Only 17.7% of black patients were identified as high-risk; the true number should have been ~46.5%.
◦ Spending for black patients was lower than for white patients owing to "unequal access to care."
125.
⢠Julia Angwin et al., âMachine Bias,â ProPublica (May 23, 2016),
https://www.propublica.org/article/machine-bias-risk-assessments-
in-criminal-sentencing
⢠Emily Berman, âA Government of Laws and Not of Machines,â 98
B.U.L. Rev. 1278, 1315, 1316 (2018)
⢠Karni Chagal-Feferkorn, âThe Reasonable Algorithm,â U. Ill. J.L.
Tech. & Pol'y (forthcoming 2018)
⢠Duke Margolis Center for Health Policy, âCurrent State and Near-
Term Priorities for AI-Enabled Diagnostic Support Software in
Health Careâ (2019)
References
126.
⢠Cade Metz and Craig S. Smith, âWarnings of a Dark Side to A.I. in
Health Care,â NY Times (3/21/19)
⢠Daniel Schiff and Jason theBorenstein, âHow Should Clinicians
Communicate With Patients About Roles of Artificially Intelligent
Team Members?â 21(2) AMA Journal of Ethics E138-145 (Feb.
2019)
⢠Nicolas P. Terry, âAppification, AI, and Healthcare's New Iron
Triangle,â [Automation, Value, and Empathy] 20 J. Health Care L. &
Pol'y 118 (2018)
⢠Wendell Wallach, A Dangerous Master 239-43 (2015). Andrew Tutt,
âAn FDA for Algorithms,â 69 Admin. L. Rev. 83, 104 (2018)
References