Doctors’ and patients’ support of automated decision-making in healthcare
5/15/2023
Authors
Stanislav Ivanov
stanislav.ivanov@vumk.eu
Teodor Dimitrov
2016041@vumk.eu
Rationale
• Artificial intelligence has been actively applied in healthcare (Bohr &
Memarzadeh, 2020; Shaikh et al., 2023; Yu, Beam & Kohane, 2018).
• One of the most important areas of application of AI is decision-making
(Duan, Edwards & Dwivedi, 2019).
• In healthcare, this includes a wide scope of decisions: interpreting medical
test results (e.g. blood counts, medical imaging such as X-rays/scans,
electrocardiograms, biopsies, etc.), making a diagnosis based on the test
results, prescribing surgical treatment, diet, medication, etc.
Rationale
• Research so far has focused almost exclusively on the perspective of doctors
and the role of AI in improving the efficiency and quality of their work
(Martinho, Kroesen & Chorus, 2021; Topol, 2019; York et al., 2023).
• However, while AI helps doctors make better medical decisions, it is the
patients who bear the consequences of these decisions. A wrong
recommendation or decision by AI that is overlooked by the human doctor
may lead to severe, even fatal, outcomes for the patient. It is therefore
important for patients to trust the way medical decisions concerning them
are taken.
Automated decision-making approaches
Automated decision-making
• Artificial autonomous agents (AAs) - ‘software programs which respond to
states and events in their environment independent from direct instruction
by the user or owner of the agent, but acting on behalf and in the interest of
the owner’ (Bösser, 2001)
• Automated decision-making approaches (Ivanov, 2023)
▫ Human only
▫ Human-in-the-loop
▫ Human-on-the-loop
▫ Human-out-of-the-loop
AA’s involvement in the decision-making process
‘Human only’ approach
• All decisions are taken and implemented by humans.
AA’s involvement in the decision-making process
‘Human-in-the-loop’ approach
• The AA recommends a decision, but it is up to the human to accept it or not.
[Diagram: the AA receives a task and produces a recommendation; the decision itself is taken by the human.]
▫ Principal-agent relationships: none
▫ Dependence on the AA: low
▫ Human involvement: full
▫ Responsibility for the decision outcome: full responsibility of the human decision-maker
AA’s involvement in the decision-making process
‘Human-on-the-loop’ approach
• The AA takes and implements a decision, but the human, who is the principal in the relationship, can always intervene and override it as (s)he considers appropriate.
[Diagram: the AA receives a task and takes and implements the decision; the human can intervene.]
▫ Principal-agent relationships: the principal can override a decision of the agent if deemed inappropriate
▫ Dependence on the AA: medium
▫ Human involvement: minimal to medium
▫ Responsibility for the decision outcome: full responsibility of the human decision-maker
AA’s involvement in the decision-making process
‘Human-out-of-the-loop’ approach
• The human is effectively shielded from the decision-making process. The AA takes and implements autonomous decisions without any human intervention.
[Diagram: the AA receives a task and takes and implements the decision; there is no human involvement.]
▫ Principal-agent relationships: the principal has no control over the decision of the agent or its implementation
▫ Dependence on the AA: high (a high level of trust is required: in the design of the AI, in its decision-making algorithms, and in the implementation of the decision)
▫ Human involvement: none
▫ Responsibility for the decision outcome: fuzzy responsibility shared among the human employees involved in process design and software development
Automated decision-making approaches
Criteria for selection
[Figure: criteria for selecting among the automated decision-making approaches, from Ivanov (2023).]
Methodology
• Data were collected in January-March 2023 in Bulgaria through an online
questionnaire developed in Qualtrics.
• The sample includes 424 respondents (113 with medical education and 311
without).
• The list of medical decisions was limited to a manageable number (21) but
was kept diverse. It included basic organisational decisions (e.g. booking an
appointment with a doctor, appointing an attending doctor team),
interpretation of medical test results (4 of the most common tests),
diagnoses (12 decisions at the level of body systems rather than individual
diseases or organs, to avoid an overly long questionnaire), and prescriptions
(medication, treatment, and diet).
• The list of decisions was reviewed with 3 medical doctors.
Methodology
• Block 1: Demographic information about the respondents.
• Block 2: Level of knowledge about AI and healthcare.
• Block 3: Attitudes towards AI. It included 5 positive and 5 negative societal
impacts of AI, rated on a 5-point agreement scale (the statements about the
negative impacts of AI were reverse-coded).
• Block 4: Perceptions of the balance between the benefits and the risks if AI
takes each of the 21 specific medical decisions (coded 1 = risks significantly
outweigh the benefits, 3 = risks and benefits are approximately equal,
5 = benefits significantly outweigh the risks).
• Block 5: Preferred decision-making approach (1 = Human-only,
2 = Human-in-the-loop, 3 = Human-on-the-loop, 4 = Human-out-of-the-loop).
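The reverse coding applied in Block 3 follows the standard Likert recoding rule: a reversed score equals the scale maximum plus one, minus the raw score. A minimal sketch, with hypothetical answers:

```python
# Reverse-code a 5-point Likert item: 1 <-> 5, 2 <-> 4, 3 stays 3.
def reverse_code(score: int, scale_max: int = 5) -> int:
    """Map a score so that agreement with a negative statement
    counts against the positive-attitude total."""
    return scale_max + 1 - score

# Hypothetical answers to one negative-impact statement:
negative_item_answers = [5, 4, 3, 2, 1]
recoded = [reverse_code(a) for a in negative_item_answers]
print(recoded)  # [1, 2, 3, 4, 5]
```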
Key results
• Overall, respondents consider that the risks of using AI outweigh the
benefits only for decisions related to Diagnosis of psychiatric conditions
(m=2.43). For the operational decision of Booking an appointment with a
doctor (m=4.48), the benefits significantly outweigh the risks. For the other
decisions, which fall between these two extremes, the benefits are perceived
as approximately equal to or slightly higher than the risks.
Key results
• Respondents overwhelmingly prefer the Human-in-the-loop and
Human-on-the-loop approaches (for most decisions, over 90% chose one of
the two). Doctors are thus not yet ready to delegate decision-making to AI,
and patients are not yet ready to accept a medical decision taken entirely by
an AI without human control.
• There are no differences between respondents with and without medical
education in the perceived balance between benefits and risks if AI takes
medical decisions, or in the preferred decision-making approach: 41 of the
42 Chi-square test values are not statistically significant.
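The group comparisons above rely on Chi-square tests of independence, one per decision and variable. A minimal sketch with SciPy, on a hypothetical contingency table (the cell counts below are invented for illustration; only the row totals of 113 and 311 match the sample):

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table for one of the 21 decisions:
# rows = education (medical / non-medical), columns = preferred approach
# (Human-only, Human-in-the-loop, Human-on-the-loop, Human-out-of-the-loop).
observed = [
    [10, 55, 42, 6],     # respondents with medical education (n=113)
    [30, 150, 115, 16],  # respondents without medical education (n=311)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A p-value above 0.05 indicates no statistically significant difference
# between the two groups for this decision.
```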
Key results
• The cluster analysis (based on the answers to the attitudinal statements in
Block 3) revealed two clusters of approximately equal size. Cluster 1
includes 208 respondents who are more sceptical towards AI, put greater
emphasis on its risks than its benefits in decision-making, and prefer less
involvement of AI in medical decision-making. The other 216 respondents
are more open to AI and more willing to trust it with medical decisions.
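A two-cluster solution of the kind described above can be sketched with a minimal k-means routine. The scores below are hypothetical one-dimensional attitude means, not the study's data (the actual analysis used the answers to all ten attitudinal statements):

```python
import random

# Minimal 2-means clustering on hypothetical per-respondent attitude
# scores (mean agreement across the 10 statements, negatives reverse-coded).
def two_means(values, iterations=20, seed=0):
    rng = random.Random(seed)
    c1, c2 = rng.sample(values, 2)          # pick two starting centres
    for _ in range(iterations):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        if g1:
            c1 = sum(g1) / len(g1)          # move centres to group means
        if g2:
            c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

# Hypothetical sample: a sceptical group around 2.3 and an open group around 4.2
scores = [2.1, 2.4, 2.2, 2.6, 2.3, 4.1, 4.4, 4.2, 3.9, 4.3]
centres = two_means(scores)
print(centres)  # two cluster centres, roughly 2.3 and 4.2
```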
Key results
• The perceived balance between the risks and benefits, if decisions are taken
by AI, is strongly and positively correlated with the preferred
decision-making approach (ρ=0.949, p<0.001). This is further supported by
Figure 1, which plots the means of the two variables for all 21 individual
decisions.
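The reported rank correlation across the 21 decisions can be computed with SciPy as follows. Apart from the two endpoint means stated above (2.43 and 4.48), the per-decision values below are illustrative placeholders, not the study's data:

```python
from scipy.stats import spearmanr

# Hypothetical per-decision means for the 21 decisions:
# balance = perceived benefit-risk balance (1-5),
# approach = preferred decision-making approach (1-4).
balance = [2.43, 3.0, 3.1, 3.2, 3.3, 3.35, 3.4, 3.5, 3.55, 3.6, 3.65,
           3.7, 3.75, 3.8, 3.85, 3.9, 3.95, 4.0, 4.1, 4.3, 4.48]
approach = [1.6, 1.9, 1.95, 2.0, 2.05, 2.1, 2.1, 2.15, 2.2, 2.2, 2.25,
            2.3, 2.3, 2.35, 2.4, 2.4, 2.45, 2.5, 2.55, 2.7, 2.9]

rho, p = spearmanr(balance, approach)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
# A rho close to 1 means decisions whose benefits are perceived as higher
# are also the ones for which more automation is preferred.
```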
Conclusion
ROBONOMICS:
The Journal of the Automated Economy
• Published by Zangador Research Institute
• First issue in 2021
• 1 continuous volume per year
• Platinum open access
• Not indexed yet
• AI-generated articles are welcome!
• https://journal.robonomics.science
References
• Bohr, A., & Memarzadeh, K. (Eds.). (2020). Artificial intelligence in healthcare. London: Academic Press.
• Bösser, T. (2001). Autonomous agents. In Smelser, N. J., & Baltes, P. B. (Eds.), International Encyclopedia of the Social and Behavioral Sciences (pp. 1002-1006). Pergamon. https://doi.org/10.1016/B0-08-043076-7/00534-9
• Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of Big Data – evolution, challenges and research agenda. International Journal of Information Management, 48, 63-71. https://doi.org/10.1016/j.ijinfomgt.2019.01.021
• Ivanov, S. (2023). Automated decision-making. Foresight, 25(1), 4-19. https://doi.org/10.1108/FS-09-2021-0183
• Jiang, L., Wu, Z., Xu, X., Zhan, Y., Jin, X., Wang, L., & Qiu, Y. (2021). Opportunities and challenges of artificial intelligence in the medical field: Current application, emerging problems, and problem-solving strategies. Journal of International Medical Research, 49(3), 03000605211000157.
• Martinho, A., Kroesen, M., & Chorus, C. (2021). A healthy debate: Exploring the views of medical doctors on the ethics of artificial intelligence. Artificial Intelligence in Medicine, 121, 102190. https://doi.org/10.1016/j.artmed.2021.102190
• Shaikh, T. A., Hakak, S., Rasool, T., & Wasid, M. (Eds.). (2023). Machine Learning and Artificial Intelligence in Healthcare Systems: Tools and Techniques. CRC Press.
• Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.
• York, T. J., Raj, S., Ashdown, T., & Jones, G. (2023). Clinician and computer: a study on doctors’ perceptions of artificial intelligence in skeletal radiography. BMC Medical Education, 23(1), 1-10. https://doi.org/10.1186/s12909-022-03976-6
• Yu, K. H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719-731. https://doi.org/10.1038/s41551-018-0305-z
THANK YOU FOR YOUR ATTENTION!
QUESTIONS?