Deliverable 3 - Evaluate Research and Data
Attempt 2
Jamie Raines
Rasmussen College
HSA5000CBE Section 01CBE Scholarly Research and Writing
Caroline Gulbrandsen
9/1/2022
Research Question Evaluation
The Credibility of the Data
The research question guiding this study concerns how the integration of artificial intelligence (AI) into clinical radiology has the potential to disrupt the industry. According to Becker et al. (2022), AI is strongly connected to the operations performed in clinical radiology and improves test results; the technology allows machines to achieve human-level performance in detecting tumors during radiology examinations. Ongoing research in the AI industry continues to refine the technical operations machines perform, which helps validate the integration of AI algorithms into patient care. The data in the article are credible because the authors studied how healthcare professionals who have used artificial intelligence have promoted better health management. All of the authors are affiliated with the European Society of Radiology and hold advanced degrees in relevant fields (Becker et al., 2022). The second article is likewise credible: its authors have hospital experience and university-level education (Mulryan et al., 2022). Most research participants agreed that integrating AI into radiology information technology (IT) departments has improved accuracy and reduced the time required to set up systems.
The next article focused on the use of AI for medical imaging, where it is clear that demand for AI is constantly growing. According to Mulryan et al. (2022), advancements in AI have been disruptive, causing some radiologists to develop a negative attitude toward a technology that could potentially eliminate human jobs. AI has been found to simulate human cognitive capacity, which is not well received by professionals in healthcare settings. This suggests that further work is needed to validate AI operations, since they are required for system management. The article's data were collected from journalists, radiologists, commercial
representatives, researchers, and non-radiologist doctors, all of whom offered different opinions on the impact of AI in medical imaging. Other health experts also contributed insights on the topic, provided they had advanced education in their professions, which helped ensure their contributions were credible.
Documentation of the Data
All the articles reflect a high-quality data documentation process: they address distinct topics and use graphics and graphs to explain how the research was conducted. It was straightforward to determine that the articles were quantitative, since each compared different variables. In the article by Becker et al. (2022), the data captured how different respondents rated the value of AI in clinical radiology. The data were then recorded in tables organized by question, allowing an accurate analysis of outcomes by counting the responses that supported or did not support the concepts raised in each research question. In the same article, an example of accurate documentation is a graph showing why clinical radiologists had not acquired certified AI-algorithm expertise.
In the second article, Mulryan et al. (2022) offered accurate data representations using schematics that combined participants' different reactions with the number of participants, applying a logistic regression model to assess the data's validity. The quantitative method was sound: the authors analyzed how participants were distributed across the study and how their answers were spread. The study was direct, requiring participants only to answer questions during survey sessions while providing their data for easy identification. Collecting data from different radiologists promoted the study's reliability, since the data can be assessed by any professional and yield a consistent health management standard.
Evaluation of Data Analysis and Interpretation
The data collected from the articles support the hypothesis developed for the study: conforming to AI practices in radiology is imperative, since those practices inevitably affect healthcare delivery. AI appears to be growing more advanced, especially in the radiology industry, which provides a structure for change in how system operations are handled. Superintelligent operations may become possible given AI's ability to mimic the behavior of human clinical radiologists. Becker et al. (2022) described the expertise involved in advancing AI operations, based on the capability to correlate ideas and generate better data analysis. The presence of AI in clinical radiology is tied to changes across the healthcare environment because of the technology's capability to manage system operations; this creates a structure for change in the form of better machine operations that improve patient healthcare. Human beings must accept AI if intelligence improvements are to support a safe healthcare future (Mulryan et al., 2022). These authors adopted a document analysis process, seeking credible sources on how radiology operations are performed.
Possible Ethical Issues
In conclusion, it is possible to correlate real-life positive healthcare outcomes with the data provided by people familiar with the area of interest. Improving AI operations requires certain conditions, including algorithm management and logic handling, which are imperative for supporting the expertise of clinical radiologists as they learn to perform AI operations. In any personal study, the method of seeking factual data can be constrained by the need to avoid plagiarizing original information. Another issue is obtaining informed consent from professionals in the radiology industry: it is easy to find finished work posted online, yet contacting the developers and securing permission to use their work in research can be challenging. Confidentiality is another requirement
that must be considered: researchers need to protect the original owners of any piece of work without appearing to steal data. All these possible issues can be addressed by conducting thorough research and carefully assessing the topic.
References
Becker, C., Kotter, E., Fournier, L., & Martí-Bonmatí, L. (2022). Current practical experience with artificial intelligence in clinical radiology: A survey of the European Society of Radiology. Insights Into Imaging, 13(1). https://doi.org/10.1186/s13244-022-01247-y
Mulryan, P., Ni Chleirigh, N., O’Mahony, A., Crowley, C., Ryan, D., McLaughlin, P., et al. (2022). An evaluation of information online on artificial intelligence in medical imaging. Insights Into Imaging, 13(1). https://doi.org/10.1186/s13244-022-01209-4
Pharmacy Sterile and Nonsterile Compounding: Sterile Compounding Procedures
Question 1 (5 points)
Which chapters of the USP are applicable to nutritional supplements?
Question 1 options:
A) Chapters 2000 and later
B) Chapters 50 through 250
C) Chapters 1000 and later
D) Chapters 1500 through 1999
Question 2 (5 points)
When administering a TPN, the bag should be discarded how often?
Question 2 options:
A) After it finishes running
B) Within 12 hours
C) As ordered by the physician
D) Within 24 hours
Question 3 (5 points)
In which locations should sterile supplies be removed from their outer wrappings?
Question 3 options:
A) In the ante area
B) At the edge of the hood
C) In the buffer area
D) Six inches inside the hood
Question 4 (5 points)
Which of the following is responsible for creating the compounding record (CR) for each CSP?
Question 4 options:
A) Physician
B) Pharmacy Technician
C) Pharmacist
D) Compounder
Question 5 (5 points)
Which of the following microbial growth mediums would be used to sample for fungal and bacterial growth in the sterile environment?
Question 5 options:
A) Bacteriological agar
B) Trypticase soy agar
C) Tryptone glucose extract agar
D) MacConkey agar
Question 6 (5 points)
When a nonsterile product is used in the preparation of a Category 2 CSP,
Question 6 options:
A) the assigned BUD is 10 days when refrigerated.
B) the assigned BUD is 4 days at room temperature.
C) endotoxin testing must be performed.
D) sterility testing must be performed.
Question 7 (5 points)
500 mL of 20% Liposyn provides
Question 7 options:
A) 1000 calories per day.
B) 2000 calories per day.
C) 1500 calories per day.
D) 500 calories per day.
Question 8 (5 points)
This critical site must always be exposed to first air and swabbed with a sterile IPA pad.
Question 8 options:
A) Syringe tip
B) Bag port
C) Needle hub
D) Syringe plunger
Question 9 (5 points)
A ___ should be placed on a CSP to ensure it hasn't been compromised during transport.
Question 9 options:
A) sterile cap
B) IV port seal
C) IV port cover
D) tamper-proof seal
Question 10 (5 points)
The temperature of the refrigerator used to store CSP must be checked ___ to ensure the stability of the CSP.
Question 10 options:
A) monthly
B) daily
C) hourly
D) weekly
Question 11 (5 points)
Growth media used for surface sampling must be inverted and placed in an incubator for ___ at 20° to 25° C followed by an additional 2 to 3 days at 30° to 35° C.
Question 11 options:
A) 5 days
B) 7 days
C) 14 days
D) 3 days
Question 12 (5 points)
Calculations a technician performs for the CSP should be located on the
Question 12 options:
A) master formulation record (MFR).
B) quality assurance (QA).
C) compounding record (CR).
D) standard operating procedure (SOP).
Question 13 (5 points)
In an ISO Class 8 environment, the air contains no more than ___ particles of 0.5 microns per cubic meter.
Question 13 options:
A) 100,000
B) 150,000
C) 200,000
D) 125,000
Question 14 (5 points)
In which environment should documentation of CSPs be performed?
Question 14 options:
A) ISO Class 5
B) ISO Class 9
C) ISO Class 7
D) ISO Class 8
Question 15 (5 points)
Surface sampling of the interior of a PEC in a SCA is required ___
Question 15 options:
A) quarterly.
B) annually.
C) every six months.
D) monthly.
Question 16 (5 points)
Certification of an ISO Class 5, 7, or 8 compounding environment includes
Question 16 options:
A) airflow testing, HEPA filter integrity testing, total particle counts, and smoke studies.
B) HEPA filter integrity testing, total particle counts, and smoke studies.
C) airflow testing, HEPA filter integrity testing, technician inspection, and smoke studies.
D) evacuation testing, HEPA filter integrity testing, total particle counts, and technician inspection.
Question 17 (5 points)
Before withdrawing fluid from a vial using a standard non-vented needle, which of the following should be done to prevent creating a vacuum in the vial?
Question 17 options:
A) Inject a volume of air that's half of the amount of liquid withdrawn.
B) Nothing needs to be done beforehand; the fluid can be withdrawn without adding air.
C) Inject a volume of air equal to the amount of liquid being withdrawn.
D) Inject a volume of air greater than the amount of liquid being withdrawn.
Question 18 (5 points)
A technician is staging the final IV preparation. This is done for the pharmacist to _____ the admixture.
Question 18 options:
A) deliver
B) administer
C) dispose of
D) check
Question 19 (5 points)
___ is an example of a fat source used in parenteral solutions.
Question 19 options:
A) Normal saline
B) Liposyn
C) Dextrose
D) Aminosyn
Question 20 (5 points)
Visual inspection of the CSP should be performed ___ the final pharmacist check.
Question 20 options:
A) during
B) before
C) after
D) while
Deliverable 3 - Evaluate Research and Data
Jamie Raines
Attempt 1
Rasmussen College
HSA5000CBE Section 01CBE Scholarly Research and Writing
Caroline Gulbrandsen
8/26/2022
Research Question Evaluation
The Credibility of the Data
The research question integrated into this study is related to how artificial intelligence (AI) integration in clinical radiology has the potential to disrupt the industry. According to Becker et al. (2022), AI is strongly connected to the operations performed in clinical radiology for better test results. The technology allows machines to achieve human-level performance while detecting tumors during radiology tests. There are suitable improvements performed in the AI industry using research that structures technical operations by machines to validate the integration of AI algorithms for patient care. The data in the article are credible since the authors researched how healthcare professionals who have used artificial intelligence have promoted better health management. Most research participants agreed that AI integration in radiology information technology (IT) departments has promoted accuracy and reduced excess time for setting up systems.
The next article focused on the use of AI for medical imaging, whereby it is clear that the demand for AI is constantly progressing. According to Mulryan et al. (2022), advancements in AI have been disruptive, causing some radiologists to develop a negative attitude toward a technology that could potentially eliminate human jobs. AI has been found to simulate human brain capacity, which is not well received by professionals in healthcare settings. This indicates more work can be performed to validate AI operations since they are needed for system management. The article's data were collected from journalists, radiologists, commercial representatives, researchers, and non-radiologist doctors, and all provided different opinions on the impact of AI in medical imaging.
Documentation of the Data
All the articles integrate a high-quality data documentation process since there are different topics, graphics, and graphs that explain how the research was conducted. There was a direct process to determine that the articles were quantitative since there was a comparison among different variables. In the article by Becker et al. (2022), the data analyzed how different respondents reacted to the value of AI in clinical radiology. The data were then recorded in tables under different questions so that there would be an accurate analysis outcome by finding out the number of responses that supported or did not support concepts mentioned in a research question. In the same article, an example of accurate documentation is a graph that indicated data from clinical radiologists on why they did not acquire certified AI-algorithm expertise.
In the second article, Mulryan et al. (2022) offered accurate data representations using schematics that integrated participants' different reactions and the number of participants, applying a logistic regression model to determine the data's validity. Key terms at the start of the document made the quantitative research method easy to detect, indicating the number of participants used in the research and how their answers were distributed. The study was direct and only required participants to answer questions during survey sessions while providing their data for easy identification. Collecting data from different radiologists was applied to promote the study's validity.
Evaluation of Data Analysis and Interpretation
The data collected from the articles support a hypothesis developed for the study: it is imperative to conform to AI practices in radiology since they inevitably affect healthcare delivery. The future of AI appears to be getting more advanced, especially for the radiology industry, which integrates a structure for change in handling system operations. It can be possible to handle superintelligence operations based on AI's ability to mimic human clinical radiologists' behavior. The authors introduced the level of expertise in handling advancements in AI operations based on the capability to correlate ideas for generating better data analysis. The existence of AI for clinical radiology is a structure connected to the change of different aspects in the healthcare environment due to its capability to manage system operations. There is thus a structure for change in terms of better machine operations for patient healthcare improvement. Human beings must accept AI if intelligence improvements are to support a safe healthcare future.
Possible Ethical Issues
In conclusion, it would be possible to correlate real-life positive healthcare outcomes with the data provided by persons familiar with the area of interest. There are conditions required to improve AI operations, including algorithm management and logic handling, which are all imperative to support healthcare expertise by clinical radiologists as they learn to perform AI operations. All these are appropriate techniques for promoting accurate system handling as a structure for AI integration into clinical radiology and medical imaging. Digitization of healthcare imaging affects existing automation operations that seek to engage people with technical expertise, structured to integrate better machine learning processes as set up using clinical radiology expertise. The authors were thus concerned by the influx of technology in healthcare that may seem to undermine the expertise of healthcare providers. It is therefore critical to operate in the current advanced IT environment using the training of clinical radiologists instead of solely trusting AI. Applying social constructs and views from clinical radiologists is thus imperative to constantly manage AI operations related to accurate system improvements.
References
Becker, C., Kotter, E., Fournier, L., & Martí-Bonmatí, L. (2022). Current practical experience with artificial intelligence in clinical radiology: A survey of the European Society of Radiology. Insights Into Imaging, 13(1). https://doi.org/10.1186/s13244-022-01247-y
Mulryan, P., Ni Chleirigh, N., O’Mahony, A., Crowley, C., Ryan, D., McLaughlin, P., et al. (2022). An evaluation of information online on artificial intelligence in medical imaging. Insights Into Imaging, 13(1). https://doi.org/10.1186/s13244-022-01209-4
Mulryan et al. Insights into Imaging (2022) 13:79
https://doi.org/10.1186/s13244-022-01209-4
ORIGINAL ARTICLE
An evaluation of information online on artificial intelligence in medical imaging
Philip Mulryan, Naomi Ni Chleirigh, Alexander T. O’Mahony, Claire Crowley, David Ryan, Patrick McLaughlin, Mark McEntee, Michael Maher and Owen J. O’Connor
Abstract
Background: Opinions seem somewhat divided when
considering the effect of artificial intelligence (AI) on medi -
cal imaging. The aim of this study was to characterise
viewpoints presented online relating to the impact of AI on the
field of radiology and to assess who is engaging in this
discourse.
Methods: Two search methods were used to identify online
information relating to AI and radiology. Firstly, 34 terms
were searched using Google and the first two pages of results
for each term were evaluated. Secondly, a Rich Site Summary (RSS) feed evaluated incidental information over 3 weeks.
Webpages were evaluated and categorized as having a
positive, negative, balanced, or neutral viewpoint based on
study criteria.
Results: Of the 680 webpages identified using the Google search engine, 248 were deemed relevant and accessible. 43.2% had a positive viewpoint, 38.3% a balanced viewpoint, 15.3% a neutral viewpoint, and 3.2% a negative viewpoint. Peer-reviewed journals represented the most common webpage source (48%), followed by media (29%), commercial sources (12%), and educational sources (8%). Commercial webpages had the highest proportion of positive viewpoints (66%). Radiologists were identified as the most common author group (38.9%). The RSS feed identified 177 posts that were relevant and accessible; 86% of these were of media origin, and positive viewpoints predominated (64%).
Conclusion: The overall opinion of the impact of AI on
radiology presented online is a positive one. Consistency
across a range of sources and author groups exists. Radiologists
were significant contributors to this online discussion
and the results may impact future recruitment.
Keywords: Artificial intelligence in radiology, Perspectives on
evolution of radiology, Future impact on the
radiologist, Radiology recruitment, Radiology efficiency
© The Author(s) 2022. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Keypoints
• Consensus? An overall positive opinion exists online towards AI on the future of radiology.
• Radiologists? A high proportion of radiologists believe there will be a positive impact.
Background
Artificial intelligence (AI) involves the use of computer
algorithms to perform tasks typically associated with
human intelligence [1]. The role of AI in medical imag-
ing has progressed to various stages of development,
application and refinement over the past 10–15 years.
Consequentially, publications on AI in medical imaging
have exponentially increased from about 100–150 per
year in 2007–2008 to 700–800 per year in 2016–2017
[2]. Several studies pertaining to dermatology, pathology, and ophthalmology have shown the potential and clinical utility of AI algorithms. For example, skin cancer, the most diagnosed malignancy worldwide, is primarily
diagnosed visually. Deep neural networks (DNN) have demonstrated equivalence with consultant dermatologist diagnostic ability [3]. Hence the early evolution of AI has leaned towards the visual sciences, and its application to radiology is an extension of this.
Medical imaging interpretation requires accuracy, pre-
cision, and fidelity. At its essence it is a visual science
whereby the interpreter translates either a single or series
of images into a succinct report to answer a clinical ques-
tion and guide evidence-based management. Studies
report that on average a radiologist must interpret one
image every 3–4 s in an 8-h workday to meet workload
demands [4] and with the compound annual growth rate
(CAGR) of diagnostic imaging estimated to be 5.4% until
2027 [5] increasing workloads are expected. Burnout
has been ubiquitously reported among medical special-
ties (~ 25–60%) [6, 7] with limited solutions being pro-
posed and implemented; thus many key advantages may
be conferred by the incorporation of AI into radiologi -
cal practice. The applications of AI in radiology can be
broadly divided into diagnostic and logistic. Computer-
aided diagnostics (CAD) may facilitate earlier detection
of abnormalities, improve patient outcomes, reduce med-
ico-legal exposure, and decrease radiologist workload.
Logistic improvements would include optimization of
workflow, prompt communication of critical findings and
more efficient vetting and triage systems.
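As a quick sanity check on the workload figures cited above, the arithmetic can be sketched in a few lines of Python (an illustration only; the per-image rate and CAGR are the values quoted in the text, while the six-year horizon is an assumption chosen for the example):

```python
# Rough workload arithmetic from the figures quoted in the text:
# one image every 3-4 seconds over an 8-hour workday, and a 5.4% CAGR
# in diagnostic imaging volume. The 6-year horizon is an assumed
# example (e.g. 2021 -> 2027), not a figure from the paper.
SECONDS_PER_WORKDAY = 8 * 60 * 60       # 28,800 s in an 8-h day

images_low = SECONDS_PER_WORKDAY / 4    # one image every 4 s
images_high = SECONDS_PER_WORKDAY / 3   # one image every 3 s
growth = 1.054 ** 6                     # cumulative factor at 5.4%/year

print(f"Images per day: {images_low:.0f}-{images_high:.0f}")
print(f"Cumulative volume growth over 6 years: {(growth - 1) * 100:.0f}%")
```

At those rates a radiologist would read roughly 7,200-9,600 images per day, and a 5.4% annual growth rate compounds to about a third more volume within six years, which is what makes the burnout concern concrete.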
Historically, apprehension has existed concerning
recruitment within the medical and radiological commu-
nity as a result of AI. Focused assessment of individual
stakeholder groups in relation to AI in radiology dem-
onstrated a wide spectrum of opinion. Studies of medi-
cal student perspectives in North America, Europe and
the United Kingdom conveyed heterogenous opinions
on the potential implications of AI on radiology possibly
with geographical variation [8–10]. A recurrent theme
in early studies is the large educational gap in medical
schools regarding the capability, utility, and limitations
of AI. A European multi-centre study of both radiolo-
gists in training and consultants performed in France
[11] demonstrated an overall positive perspective; how-
ever, a majority expressed concerns regarding insufficient
information on AI and its potential implications. Ten years ago, the end of radiology as a career was being heralded, and radiology residency applications fell in response to concerns about the future of the specialty [12]. At that time, perception probably reflected
local concerns in the absence of experience. It has been
shown that more positive opinions have been expressed
by those medical students with exposure to AI and radi-
ology [9]. The transition from discourse about the poten-
tial of AI to its integration and use should have modified
opinions based on practice and experience.
Therefore, this paper aimed to quantify the proportion
of positive, negative, balanced, and neutral viewpoints
presented on the internet in relation to the impact of AI
on radiology. The purpose of this was to determine the
global and regional perception of AI in radiology, and
thus, conclude as to where the future of radiology may
lie.
Methods
Data collection
Two search methods were used to evaluate information
online relating to artificial intelligence in medical imag-
ing. The first search method screened existing data on
AI in radiology at the time of search. The second method
identified a live stream of articles relating to AI as they
were released on the web. Searches were carried out
independently by two of the investigators.
Thirty-four key search phrases were established (Addi-
tional file 1: Appendix 1). Phrases were generated with
input from a medical student, healthcare professional,
non-consultant hospital doctor and prospective radiology trainee. These phrases were then validated by two consultant radiologists (M.M.M., O.J.O'C). The phrases were chosen to reflect a broad range of search terms, encompassing multidisciplinary opinion on the impact of AI on the radiology service.
Data identification
Existing data
This search was performed on ‘All’ content in the Google
search engine and was conducted over the period 25th
January 2021–7th February 2021. The Google search engine has over 90% of the market share and thus was felt to be reasonably representative of the population on a global scale [13]. The Google search was performed for
the 34 key phrases in an identical manner. Results were
limited to the English language and open access aca-
demia or where no financial stipulation was required
to access the article. We reviewed the first two pages of
Google results for these searches, as numerous studies on
user behaviour have indicated that 95% of users choose
websites listed on the first page of results, leaving only
5% reviewing results on any subsequent page [14–18].
While date of publication was not a selection criterion
all included articles from the google search were created
within the past 5 years.
Live stream
A Rich Site Summary (RSS) feed search strategy was
used to evaluate the written incident information over
a 3-week period (07/03/21–28/03/21) as a surrogate for
postings on news media and social media. The same
34 key phrases were entered into Google Alerts. This
provided a continuous search for new relevant online
content appearing subsequently. This content was then
analyzed and organized appropriately.
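As an illustration of the kind of tooling such a live-stream search implies (not the authors' actual pipeline; the feed content below is a made-up sample standing in for an alerts export), an RSS 2.0 document can be reduced to title/link pairs using only the Python standard library:

```python
import xml.etree.ElementTree as ET

def parse_rss_items(rss_xml: str) -> list[tuple[str, str]]:
    """Extract (title, link) pairs from an RSS 2.0 feed document."""
    root = ET.fromstring(rss_xml)
    return [
        (item.findtext("title", default=""), item.findtext("link", default=""))
        for item in root.iter("item")
    ]

# Hypothetical sample feed; a real alerts feed would be fetched over HTTP.
sample = """<rss version="2.0"><channel>
<item><title>AI in radiology update</title><link>https://example.org/a</link></item>
<item><title>Deep learning for CT</title><link>https://example.org/b</link></item>
</channel></rss>"""

print(parse_rss_items(sample))
```

Each new post collected this way could then be logged with its date and source for the manual relevance and viewpoint categorization described below.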
Data sourcing
The source of each relevant post was identified. The
source website was then assigned a sub-type based on the
‘About Us’ section. The source subtypes were segregated
as either journal, media, commercial, education or other
if outside of these categories. For published academia, it was noted whether it was from a peer-reviewed and/or indexed journal (PubMed). Where an identifiable
author existed, it was subtyped into radiologist, journal-
ist, non-radiologist doctor, radiographer and other. The
geographical origin and date of issue was also noted,
where available.
Data categorization
The web pages identified by the dichotomized search strategy were analyzed by each investigator homogeneously. Firstly, all Google advertisements were omitted.
Each post was then categorized as either relevant or non-
relevant. Non-relevant posts included those failing to
provide information on AI in medical imaging (such as
a journal calling for abstracts/submissions) or academia
related posts that were not open access, duplicate posts
or posts that were inaccessible.
Relevant posts were divided as either having an over-
all positive, negative, balanced, or neutral viewpoint. The
assessment and categorization of this information was
carried out by two senior authors (M.M.M., O.J.O.C),
both of whom are academic consultant radiologists
working in a large teaching hospital. The assessment was
done in tandem, and the final decision was arrived at by
consensus.
Relevance
Positive
Positive viewpoints were themed as changes brought
about by AI which would result in increased employ-
ment, service expansion, efficiency, fidelity of interpreta-
tion, improved patient care, better quality assurance and
more job satisfaction. Additional file 1: Appendix 2 pro-
vides a sample of positive viewpoints as extracted from
the data of included posts. Webpages that contained
predominantly positive information and concluded with
an overriding positive viewpoint were categorized as
‘Positive’.
Negative
Negative viewpoints were those that displayed a contrary
theme to the positive viewpoint (see Additional file 1:
Appendix 2).
Balanced and neutral
Webpages categorized as ‘Balanced’ listed comparable
amounts of positive and negative points without giv-
ing an overall positive or negative viewpoint. Webpages
categorized as ‘Neutral’ objectively presented informa-
tion relating to artificial intelligence and radiology but
did not discuss how this would impact, be it negatively
or positively, on the field of radiology. The fundamental
difference between the ‘Balanced’ and ‘Neutral’ catego-
ries is that balanced webpages explicitly discussed how
aspects of artificial intelligence would impact the field
of radiology while neutral webpages did not (see Addi -
tional file 1: Appendix 2).
Data analysis
Data compilation and statistical analyses were per-
formed using Microsoft Excel (Microsoft Corporation,
Redmond, Washington, USA) and Google Sheets (1600
Amphitheatre Parkway, Mountain View, California,
United States). Descriptive statistics were used to sum-
marize data. Frequency analyses were performed for
categorical variables.
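The frequency analysis described above amounts to tallying categorical labels; a minimal sketch in Python (with invented example records, not the study's dataset) might look like:

```python
from collections import Counter

# Invented example records: one (source_type, viewpoint) pair per
# categorized post; the real study tallied 248 Google pages this way.
posts = [
    ("journal", "positive"), ("journal", "balanced"),
    ("media", "positive"), ("media", "neutral"),
    ("commercial", "positive"), ("education", "neutral"),
]

viewpoints = Counter(viewpoint for _, viewpoint in posts)
total = len(posts)
for viewpoint, n in viewpoints.most_common():
    print(f"{viewpoint}: {n} ({100 * n / total:.1f}%)")
```

The same tally, grouped additionally by source type, yields the cross-tabulations reported in Tables 1 and 4.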
Results
A total of 680 Google pages relating to AI in medi-
cal imaging were identified. Of these, 561 pages were
deemed relevant and accessible. Duplicate pages were
removed, leaving 248 pages for evaluation.
Forty-three percent (n = 106) of these pages expressed
the overall view that AI would have a positive impact
on the radiologist and the radiology department; 3.2%
(n = 8) presented an overall negative viewpoint; 38.2%
(n = 95) presented a balanced viewpoint and 15.3%
(n = 38) presented a neutral viewpoint (see Fig. 1).
Forty-eight percent (n = 120) of the relevant pages were
from open-access peer-reviewed journals; 30.2% (n = 75)
were from media sources; 12.9% (n = 32) from commer-
cial websites and 8.5% (n = 21) from educational sources.
Table 1 summarises the allocated categories of origin and viewpoint conveyed. The type of media source along
with the details of specific commercial company can
be seen in Additional file 1: Appendix 3.1 & 3.2. Com-
mercial web pages had the highest proportion of posi-
tive viewpoints i.e., 66%, followed by media web pages
at 52%, peer-reviewed journals at 37% and educational
web pages at 14%. On the other hand, media web pages
had the greatest proportion of negative viewpoints at
5%, followed by peer-reviewed journals at 3%. Negative
viewpoints were not identified among commercial, edu-
cational, or other sources. Peer-reviewed journals had the
greatest proportion of balanced viewpoints at 48%, while
educational web pages had the greatest proportion of
neutral viewpoints at 43%.
An identifiable named author was displayed on 93%
(n = 230) of web pages, with radiologists responsible for
38.7% (n = 89); journalists represented 20% of authors
(n = 46); doctors working in other specialties repre-
sented 6.9% (n = 16); and radiographers represented 4.8%
(n = 11). Other authors not falling into the aforemen-
tioned categories made up the remaining 29.6% (n = 68).
Researchers, lawyers, and marketing managers were
amongst those in the ‘Other’ category.
Web pages authored by journalists had the highest
percentage of overall positive viewpoints (52%, n = 24).
This was followed by web pages authored by radiolo-
gists (46%, n = 41) and radiographers (45%, n = 5). Web
pages authored by non-radiologist doctors accounted
for the lowest proportion of positive viewpoints (18.8%,
n = 3). Four percent of web pages authored by radi-
ologists (n = 4) or by those falling into the ‘Other’ cat-
egory (n = 3) had negative viewpoints, followed by
web pages authored by journalists at 2% (n = 1). There
were no negative viewpoints identified in web pages
authored by radiographers or non-radiologist doctors.
Those authors falling into the category “Other” had the
highest proportion of balanced viewpoints at 39.7%
(n = 27), while journalists had the greatest proportion
of neutral viewpoints at 34.7% (n = 16). See Additional
file 1: Appendix 4.1 for tabulated summary (Fig. 2).
There were 130 pages authored in North America
expressing 60 positive, 48 balanced, 18 neutral and
4 negative pages of content. In Europe (n = 49), there
were 21 positive, 17 balanced, 9 neutral and 9 negative
pages authored. The United Kingdom had the great-
est number of European authored pages, and these
expressed 9 positive, 10 balanced, 4 neutral and 0 nega-
tive opinions (n = 23). The distribution of the remain-
ing pages from Europe was as follows: Netherlands —6,
Germany—6, Italy—8, Ireland—4, Belgium—5, Nor-
way—1, Denmark—1, Switzerland—5, Austria—1,
Cyprus—3, Europe not specified—9. Finally, a mis-
cellaneous group including: Australia—11; Israel—4;
Asia—12; South America—2, Africa—2; and Not avail-
able—14, expressed 19 positive, 18 balanced, 7 neu-
tral and 2 negative opinions in the pages that were
authored. This frequency data is presented in Table 2
with corresponding percentages in Fig. 3.
Radiologists in North America (n = 42) authored 19 positive, 18 balanced, 3 neutral and 2 negative viewpoints. In Europe, radiologists (n = 31) authored 14 positive, 12 balanced, 3 neutral and 2 negative viewpoints. UK radiologists authored four pages expressing
[Fig. 1 Schematic of Google search and results summary: of 680 total data points, 313 duplicates, 113 non-relevant pages and 6 inaccessible pages were excluded, leaving 248 relevant and accessible pages (106 positive, 95 balanced, 38 neutral, 8 negative).]
Table 1 Summary of categorization of posts by origin with percentage (n = 248)

Viewpoint   Journal (n = 120, 48.39%)   Media (n = 75, 30.20%)   Commercial (n = 32, 12.90%)   Education (n = 21, 8.47%)
Positive    44 (36.67%)                 39 (52.00%)              21 (65.63%)                   3 (14.29%)
Negative    4 (3.33%)                   4 (5.33%)                0 (0.00%)                     0 (0.00%)
Balanced    58 (48.33%)                 24 (32.00%)              4 (12.50%)                    9 (42.86%)
Neutral     14 (11.67%)                 8 (10.67%)               7 (21.88%)                    9 (42.86%)
two positive and two balanced perspectives. These data
are presented in Table 3 and Fig. 4.
The Google Alerts RSS feed identified 5504 new posts
over the 3-week period from 34 search terms. Of the
alerts identified, 177 were deemed relevant and acces-
sible. Sixty-five percent (n = 115) of the posts expressed
an overall positive viewpoint; 11% (n = 20) a balanced
viewpoint; 23% (n = 40) a neutral viewpoint; and 1%
(n = 2) an overall negative viewpoint towards the poten-
tial impact of AI on radiology (Fig. 5).
Of the relevant posts, the majority were of media ori-
gin (86%, n = 152); peer-reviewed journals accounted for
8% (n = 14); 4% (n = 7) were from commercial websites;
and 2.3% (n = 4) were from other sources. Commercial
[Fig. 2 Number of overall viewpoints presented by each author group (N = 230): radiologists 41 positive, 4 negative, 32 balanced, 13 neutral; journalists 24 positive, 1 negative, 5 balanced, 16 neutral; non-radiologist doctors 3 positive, 8 balanced, 5 neutral; radiographers 5 positive, 5 balanced, 1 neutral; other 26 positive, 3 negative, 27 balanced, 12 neutral.]
webpages had the highest percentage of overall positive
viewpoints (85.7%, n = 6). This was followed by media
webpages (67%, n = 102), peer-reviewed journals (35.7%,
n = 5), and webpages that fell under the category ‘other’
(25%, n = 1). Forums, educational webpages, and blogs
composed the ‘other’ category. Peer-reviewed jour-
nals had the greatest percentage of balanced viewpoints
(21.4%, n = 3), followed by those that fell under the category ‘other’ (25%, n = 1). One (7%) article from a peer-reviewed journal had an overall negative viewpoint, as did one (0.66%) of the media webpages. No negative
viewpoints were identified in the commercial category.
See Table 4 for summary.
An identifiable named author was present on 85%
(n = 151) of the relevant webpages identified by the
Google Alerts RSS feed. The majority of listed authors
were journalists (66%, n = 100). This was followed by
commercial authors (12.6%, n = 19), radiologists (4%,
n = 6), researchers 4% (n = 6), and doctors working in
other specialties 3.3% (n = 5). Other authors not falling
into the categories represented 9.9% (n = 15) of the con-
tributors. This is illustrated in Fig. 4. Marketing man-
agers, media editors, and students were amongst those
that made up the ‘other’ category. Webpages with a
commercial author had the highest percentage of over-
all positive viewpoints 84% (n = 16). This was followed
Table 2 Geographical origin of viewpoints

Geographical origin   Number   Positive   Negative   Neutral   Balanced
North America         130      60         4          18        48
Europe                49       21         2          9         17
UK                    23       9          0          4         10
Other                 46       19         2          7         18
[Fig. 3 Geographical origin of viewpoint (percentage): North America 46% positive, 3% negative, 14% neutral, 37% balanced; Europe 43% positive, 4% negative, 18% neutral, 35% balanced; UK 39% positive, 0% negative, 17% neutral, 43% balanced; Other 41% positive, 4% negative, 15% neutral, 39% balanced.]
Table 3 Geographical origin of radiologist and viewpoint

Origin          Number   Positive   Negative   Neutral   Balanced
North America   42       19         2          3         18
Europe          31       14         2          3         12
UK              4        2          0          0         2
Other           12       3          1          3         5
by webpages authored by journalists 64% (n = 64); non-
radiologist doctors 60% (n = 3); ‘other’ authors 53%
(n = 8); and radiologists 50% (n = 3). Researchers had
the greatest percentage of balanced viewpoints 67%
(n = 4), while radiologists had the greatest percent-
age of neutral viewpoints 33% (n = 2). One webpage
authored by a journalist (1%) and one authored by an
author in the ‘other’ category (7%) had overall negative
viewpoints. These data, summarized and tabulated, can be seen in Additional file 1: Appendix 4.2 (Fig. 6).
[Fig. 4 Geographical origin and radiologist viewpoint (percentage): North America 45% positive, 5% negative, 7% neutral, 43% balanced; Europe 45% positive, 6% negative, 10% neutral, 39% balanced; UK 50% positive, 0% negative, 0% neutral, 50% balanced; Other 25% positive, 8% negative, 25% neutral, 42% balanced.]
[Fig. 5 Schematic of live Google Alert RSS feed and results summary: of 5,504 total data points, 5,069 non-relevant posts and 258 duplicates were excluded, leaving 177 relevant and accessible posts (115 positive, 20 balanced, 40 neutral, 2 negative).]
Table 4 Summary of categorization of posts by origin with percentage (n = 177)

Viewpoint   Journal (n = 14, 7.91%)   Media (n = 152, 85.88%)   Commercial (n = 7, 3.95%)   Other (n = 4, 2.26%)
Positive    5 (35.71%)                102 (67.11%)              6 (85.71%)                  1 (25.00%)
Negative    1 (7.14%)                 1 (0.66%)                 0 (0.00%)                   1 (25.00%)
Balanced    3 (21.43%)                13 (8.55%)                0 (0.00%)                   1 (25.00%)
Neutral     5 (35.71%)                36 (23.68%)               1 (14.29%)                  1 (25.00%)
Discussion
Opinions and forecasts concerning the role and impact of AI on medical imaging have exploded in the last number of years, primarily due to recent advancements in AI products for radiology. These viewpoints can be positive, negative, balanced, or neutral in their content. AI in medical imaging was first mentioned in the literature in the 1950s and has evolved substantially since the early 2000s with
the advent of machine learning (ML) and deep learning
(DL) algorithms [19]. The number of AI exhibitors at
the annual meeting of the Radiological Society of North
America (RSNA) and the European Congress of Radiol-
ogy (ECR) has tripled from 2017 to 2019 [20, 21]. Since
2016, the US Food and Drugs administration (FDA) has
approved 64 AI ML-based medical imaging technologies
with 21 of these specializing in the field of Radiology [22].
In Europe, 240 AI/ML devices have been approved over
the 2015–2020 period by the Conformité Européene (CE)
with 53% for use in radiology [23]. In 2019, The European
Society of Radiology published a white paper to provide
the radiology community with information on AI and
a further study by the ESR demonstrated that there is a
demand amongst the radiological community to inte-
grate AI education into radiology curricula and training
programs, including issues related to ethics, legislation and data management [24]. The aim of the present paper was to use internet activity to determine current opinion on whether AI is a threat or an opportunity to the field, as this will have an impact on recruitment and resource allocation to radiology.
[Fig. 6 Number of overall viewpoints presented by each author group (N = 151): journalists 64 positive, 1 negative, 8 balanced, 27 neutral; radiologists 3 positive, 1 balanced, 2 neutral; commercial authors 16 positive, 1 balanced, 2 neutral; researchers 1 positive, 4 balanced, 1 neutral; non-radiologist doctors 3 positive, 2 balanced; other 8 positive, 1 negative, 2 balanced, 4 neutral.]
We observed that a wide diversity of commentators were engaged in dialogue pertaining to AI in radiology, ranging from those with professional and academic backgrounds to those with individual and organizational interests. While these authors predictably included healthcare professionals, there was also a significant representation from those with media and commercial backgrounds. Opinions on AI in radiology were therefore gathered from authors with a wide variety of occupations and backgrounds, including radiologists, non-radiology physicians, journalists, researchers, radiographers, commercial managers, physicists, lawyers, computer scientists, data officers, engineers, students, and pharmacists. There was
a relatively equal division of authorship between North
America and Europe. This distribution was also dem-
onstrated among radiologist authored pages included in
this study. This professional and geographic diversity of
authors provides a more complete and international sam-
ple of opinions on the impact of AI on radiology.
Radiologists repeatedly expressed the opinion that inclusion of AI algorithms could help with labour-intensive tasks and improve efficiency and workflow. They also argued against the potential of AI replacing radiologists.
Numerous studies in the literature also argued against AI
replacing radiologists [25, 26]. An example of two com-
ments made by radiologists included:
The higher efficiency provided by AI will allow radi-
ologists to perform more value-added tasks, becom-
ing more visible to patients and playing a vital role
in multidisciplinary clinical teams
And
Radiologists, the physicians who were on the fore-
front of the digital era in medicine, can now guide
the introduction of AI in healthcare - The time to
work for and with AI in radiology is now
Radiographers expressed the opinion that utilizing AI
algorithms could:
ultimately lead to a reduction in the radiation expo-
sure while maintaining the high quality of medical
images
and that radiographers would be vital in building qual-
ity imaging biobanks for AI data bases. Interestingly,
radiographers also wrote that AI should be integrated
into the medical radiation practice curriculum and there
should be more emphasis on radiomics. Furthermore, radiographers expressed the belief that emotional intelligence, not artificial intelligence, is the cornerstone of all patient care, and while the concept of ‘will a robot take my job’ may be a hot topic, they believe that patients will not accept their radiographs being taken by a robotic device.
This study identified a total of ten negative viewpoints, which included comments from radiologists (5), a lawyer (1), a journalist (1) and a neuroscience Ph.D. student (1). Examples include:
In the long-term future, I think that computers will
take over the work of image interpretation from
humans, just as computers or machines have taken
over so many tasks in our lives. The question is, how
quickly will this happen?
And
Radiologists know that supporting research into AI
and advocating for its adoption in clinical settings
could diminish their employment opportunities and
reduce respect for their profession. This provides an
incentive to oppose AI in various ways
And
An artificially intelligent computer program can
now diagnose skin cancer more accurately than a
board-certified dermatologist and better yet, the
program can do it faster and more efficiently
And
A.I. is replacing doctors in fields such as interpreting
X-rays and scans, performing diagnoses of patients’
symptoms, in what can be described as a ‘consulting
physician’ basis
A recent editorial in the Radiological Society of North
America (RSNA) highlighted a number of high-profile
negative viewpoints made a number of years ago relating
to the impact of AI on radiologists [27]. This included an AI pioneer, recently awarded the Association for Computing Machinery Turing Award, who said, "We should stop training radiologists now" [27]. Secondly, the venture capitalist Vinod Khosla proclaimed in 2017 ‘that the role of the radiologist will be obsolete in 5 years’ and replaced with ‘sophisticated algorithms’ [28]. Furthermore, an American ‘Affordable Care’ architect remarked at the 2016 American College of Radiology Annual Meeting that radiologists would be replaced by computer technology in 4–5 years [29] and that ‘in a few years there may be no specialty called radiology’ [30].
Interestingly, many of the timeframes within which AI was predicted to replace radiologists have already expired, with a relatively minor uptake of AI in imaging interpretation and no signs of AI replacing radiologists at present. These controversial viewpoints have the potential to grab headlines but are not without potential for negative impact on the future
of radiology and particularly on recruitment of future
radiologists, given that studies have shown that medical
students are less likely to consider pursuing a career in
radiology because of the apparent threat of AI to the spe-
cialty [8–10, 31].
This study found that the overwhelming majority of
web pages assessed had favourable viewpoints with very
few negative viewpoints identified. This finding is con-
sistent with a recent social media-based study showing
that discussions around AI and radiology were astound-
ingly positive, with an increasing frequency of positive
discussions identified over a 1-year period [32]. Taken
together, these findings suggest a shift in opinion from a
once negative view to a more positive one.
Of the webpages identified using the Google search
engine, Radiologists were found to be the most common
author group, making up 38.5% of all identifiable authors.
These webpages were predominantly peer-reviewed jour-
nal papers and media articles. These findings highlight
that radiologists are actively involved in both AI-related
research and online discussions relating to AI and the field
of radiology. Radiologists have been encouraged to play
an active role in the development of applications of AI in
medical imaging to ensure appropriate implementation
and validation of AI in clinical practice [26, 33]. In 2017,
The American College of Radiology established The Data
Science Institute partly with this purpose in mind [34].
The main limitation of this study was the use of sub-
jective assessment to qualify information into positive,
negative, neutral, and balanced. This introduces potential
for observer bias in determining the overall viewpoint of
posts, but it was attempted to minimize this by using two
senior radiologists as the assessors. We did not quantitatively assess readability of posts. We only used one search engine, Google, and limited searches to the English language and to the first two pages of each search term, a strategy following previous publications and backed by behavioral studies which have indicated that 95% of users choose websites listed on the first page of results, leaving only 5% reviewing results on any subsequent pages.
We acknowledge that the list of search terms in Additional file 1: Appendix 1 is not exhaustive and is just a representative sample of the terms that may be used when searching for AI in medical imaging, but we expected that using a broad range of terms and studying the first two pages of findings would yield the most relevant information. The RSS feed was used as a
surrogate for incident information and may not be wholly
representative of information found in social media news
feeds, Twitter, and other sites. There is also potential that
a single 3-week alert period may be biased by news and
media events that occurred during that time.
In conclusion, authors of 43% of all pages evaluated expressed the overall opinion that AI would have
a positive impact on the radiologist and the radiology
department; 38.3% presented a balanced viewpoint;
15.3% presented a neutral viewpoint; and 3.2% pre-
sented a negative viewpoint. We have demonstrated
that the overall view presented online is a positive one
that AI will benefit the specialty. We should be excited
and look forward to advancements in this technol-
ogy which has the potential to improve accuracy of
diagnosis in diagnostic radiology, reduce errors and
improve efficiency in dealing with rapidly increasing
workloads.
Abbreviations
AI: Artificial intelligence; CAGR: Compound annual growth rate; CAD: Computer-aided diagnostics; DL: Deep learning; ECR: European Congress of Radiology; FDA: Food and Drug Administration; ML: Machine learning; RSS: Rich Site Summary; RSNA: Radiological Society of North America.
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s13244-022-01209-4.
Additional file 1: 34 key search phrases used in both static Google search and Rich Site Summary feed search strategy.
Authors’ contributions
PM and NNC had full access to all of the data in the study and take responsibility for the integrity of the data and accuracy of the data analysis. Concept and design: PM, NNC, DR, CC, PMcL, MMcE, OJO’C, MM. Acquisition, analysis, or interpretation of data: PM, NNC, CC, DR, MM, OJO’C, ATO’M. Drafting of the manuscript: PM, NNC, OJO’C, MM, ATO’M. Critical revision of the manuscript for important intellectual content: PM, NNC, ATO’M, CC, MMcE, MM, OJO’C. Administrative, technical, or material support: ATO’M. Supervision: PMcL, MM, OJO’C. All authors read and approved the final manuscript.
Funding
No sources of funding were sought or required to carry out this study.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
Ethical approval was granted by the institutional review board: Clinical Research Ethics Committee of the Cork Teaching Hospitals.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Author details
1 Cork University Hospital/Mercy University Hospital, Cork, Ireland. 2 University College Cork, Cork, Ireland. 3 Cork University Hospital, Cork, Ireland. 4 South Infirmary Victoria University Hospital, Cork, Ireland.
Received: 12 November 2021 Accepted: 12 March 2022
Page 11 of 11. Mulryan et al. Insights into Imaging (2022) 13:79
References
1. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. The MIT Press, Cambridge
2. Pesapane F, Codari M, Sardanelli F (2018) Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2:35. https://doi.org/10.1186/s41747-018-0061-6
3. Esteva A, Kuprel B, Novoa RA et al (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639):115–118. https://doi.org/10.1038/nature21056
4. McDonald RJ, Schwartz KM, Eckel LJ et al (2015) The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload. Acad Radiol 22(9):1191–1198. https://doi.org/10.1016/j.acra.2015.05.007
5. Wood L (2021) The worldwide diagnostic imaging industry is expected to reach $48.5 billion by 2027. BusinessWire, a Berkshire Hathaway Company. ResearchandMarkets.com. https://www.businesswire.com/news/home/20211209005945/en/The-Worldwide-Diagnostic-Imaging-Industry-is-Expected-to-Reach-48.5-Billion-by-2027---ResearchAndMarkets.com
6. Shanafelt TD, Gradishar WJ, Kosty M et al (2014) Burnout and career satisfaction among US oncologists. J Clin Oncol 32(7):678–686. https://doi.org/10.1200/JCO.2013.51.8480
7. Shanafelt TD, Balch CM, Bechamps GJ et al (2009) Burnout and career satisfaction among American surgeons. Ann Surg 250(3):463–471. https://doi.org/10.1097/SLA.0b013e3181ac4dfd
8. PintoDosSantos D, Giese D, Brodehl S et al (2019) Medical students’ attitude towards artificial intelligence: a multicentre survey. Eur Radiol 29(4):1640–1646. https://doi.org/10.1007/s00330-018-5601-1
9. Gong B, Nugent JP, Guest W et al (2019) Influence of artificial intelligence on Canadian medical students’ preference for radiology specialty: a national survey study. Acad Radiol 26(4):566–577
10. Sit C, Srinivasan R, Amlani A et al (2020) Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging 11:14. https://doi.org/10.1186/s13244-019-0830-7
11. Waymel Q, Badr S, Demondion X, Cotten A, Jacques T (2019) Impact of the rise of artificial intelligence in radiology: what do radiologists think? Diagn Interv Imaging 100(6):327–336. https://doi.org/10.1016/j.diii.2019.03.015
12. Chen JY, Heller MT (2014) How competitive is the match for radiology residency? Present view and historical perspective. J Am Coll Radiol 11(5):501–506. https://doi.org/10.1016/j.jacr.2013.11.011
13. GlobalStats (2021) Search engine market share worldwide 2021–2022. https://gs.statcounter.com/search-engine-market-share. Accessed 02 Jan 2021
14. Lorigo L, Pan B, Hembrooke H, Joachims T, Granka L, Gay G (2006) The influence of task and gender on search and evaluation behavior using Google. Inf Process Manag 42(4):1123–1131
15. Spink A, Jansen BJ, Blakely C, Koshman S (2006) A study of results overlap and uniqueness among major web search engines. Inf Process Manag 42(5):1379–1391
16. Enge E, Spencer S, Stricchiola J, Fishkin R (2012) The art of SEO: mastering search engine optimization, 2nd edn. O’Reilly Media, Sebastopol
17. Hopkins L (2012) Online reputation management: why the first page of Google matters so much. http://www.leehopkins.net/2012/08/30/online-reputation-management-why-the-first-page-of-google-matters-so-much/. Accessed 06 Feb 2021
18. Chuklin A, Serdyukov P, De Rijke M (2013) Modeling clicks beyond the first result page. In: Proceedings of international conference on information and knowledge management, pp 1217–1220
19. Kaul V, Enslin S, Gross SA (2020) History of artificial intelligence in medicine. Gastrointest Endosc 92(4):807–812
20. Radiological Society of North America (2017) AI exhibitors RSNA 2017. http://rsna2017.rsna.org/exhibitor/?action=add&filter=Misc&value=Machine-Learning. Accessed
21. Radiological Society of North America (2019) AI exhibitors RSNA 2019. https://rsna2019.mapyourshow.com/8_0/explore/pavilions.cfm#/show/cat-pavilion|AI%20Showcase. Accessed
22. Benjamens S, Dhunnoo P, Meskó B (2020) The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med 3:118. https://doi.org/10.1038/s41746-020-00324-0
23. Muehlematter UJ, Daniore P, Vokinger KN (2021) Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. Lancet Digit Health 3(3):e195–e203. https://doi.org/10.1016/S2589-7500(20)30292-2
24. Codari M, Melazzini L, Morozov SP et al (2019) Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights Imaging 10:105. https://doi.org/10.1186/s13244-019-0798-3
25. Recht M, Bryan RN (2017) Artificial intelligence: threat or boon to radiologists? J Am Coll Radiol 14:1476–1480
26. King BF (2018) Artificial intelligence and radiology: what will the future hold? J Am Coll Radiol 15(3, Part B):501–503
27. Langlotz CP (2019) Will artificial intelligence replace radiologists? Radiol Artif Intell 1(3):e190058
28. Farr C (2020) Here’s why one tech investor thinks some doctors will be ‘obsolete’ in five years. CNBC 2017. https://www.cnbc.com/2017/04/07/vinod-khosla-radiologists-obsolete-five-years.html. Accessed 4 Feb 2020
29. Siegel E (2020) Will radiologists be replaced by computers? Debunking the hype of AI. Carestream 2016. https://www.carestream.com/blog/2016/11/01/debating-radiologists-replaced-by-computers/. Accessed 4 Feb 2020
30. Chockley K, Emanuel E (2016) The end of radiology? Three threats to the future practice of radiology. J Am Coll Radiol 13:1415–1420. https://doi.org/10.1016/j.jacr.2016.07.010
31. Bin Dahmash A, Alabdulkareem M, Alfutais A et al (2020) Artificial intelligence in radiology: does it impact medical students preference for radiology as their future career? BJR Open 2(1):20200037
32. Goldberg JE, Rosenkrantz AB (2019) Artificial intelligence and radiology: a social media perspective. Curr Probl Diagn Radiol 48(4):308–311
33. Dreyer K, Allen B (2018) Artificial intelligence in health care: brave new world or golden opportunity? J Am Coll Radiol 15(4):655–657
34. McGinty GB, Allen B (2018) The ACR data science institute and AI advisory group: harnessing the power of artificial intelligence to improve patient care. J Am Coll Radiol 15(3, Part B):577–579
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
An evaluation of information online on artificial intelligence in medical imaging
European Society of Radiology (ESR)
Insights into Imaging (2022) 13:107
https://doi.org/10.1186/s13244-022-01247-y

STATEMENT

Current practical experience with artificial intelligence in clinical radiology: a survey of the European Society of Radiology

European Society of Radiology (ESR)*
Abstract
A survey among the members of the European Society of Radiology (ESR) was conducted regarding the current practical clinical experience of radiologists with Artificial Intelligence (AI)-powered tools. 690 radiologists completed the survey. Among these were 276 radiologists from 229 institutions in 32 countries who had practical clinical experience with an AI-based algorithm and formed the basis of this study. The respondents with clinical AI experience included 143 radiologists (52%) from academic institutions, 102 radiologists (37%) from regional hospitals, and 31 radiologists (11%) from private practice. The use case scenarios of the AI algorithm were mainly related to diagnostic interpretation, image post-processing, and prioritisation of workflow. Technical difficulties with integration of AI-based tools into the workflow were experienced by only 49 respondents (17.8%). Of 185 radiologists who used AI-based algorithms for diagnostic purposes, 140 (75.7%) considered the results of the algorithms generally reliable. The use of a diagnostic algorithm was mentioned in the report by 64 respondents (34.6%) and disclosed to patients by 32 (17.3%). Only 42 (22.7%) experienced a significant reduction of their workload, whereas 129 (69.8%) found that there was no such effect. Of 111 respondents who used AI-based algorithms for clinical workflow prioritisation, 26 (23.4%) considered the algorithms to be very helpful for reducing the workload of the medical staff, whereas the others found them only moderately helpful (62.2%) or not helpful at all (14.4%). Only 92 (13.3%) of the total 690 respondents indicated that they had intentions to acquire AI tools. In summary, although the assistance of AI algorithms was found to be reliable for different use case scenarios, the majority of radiologists experienced no reduction of practical clinical workload.
Keywords: Professional issues, Artificial intelligence in imaging, Artificial intelligence and workload, Artificial intelligence in radiology
© The Author(s) 2022. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Key points
• Artificial Intelligence (AI) algorithms are being used for a large spectrum of use case scenarios in clinical radiology in Europe, including assistance with interpretive tasks, image post-processing, and prioritisation in the workflow.
• Most users considered AI algorithms generally reliable and experienced no major problems with technical integration in their daily practice.
• Only a minority of users experienced a reduction of the workload of the radiological medical staff due to the AI algorithms.
Background and objectives
Digital imaging is naturally predisposed to benefit from the rapid and exciting progress in data science. The increase of imaging examinations and the associated diagnostic data volume have resulted in a mismatch
*Correspondence: [email protected]
European Society of Radiology (ESR), Am Gestade 1, 1010
Vienna, Austria
between the radiologic workforce and workload in many European countries. In an opinion survey conducted in 2018 among the members of the European Society of Radiology (ESR), many respondents had expectations that algorithms based on artificial intelligence (AI) and particularly machine learning could reduce radiologists’ workload [1]. Although a growing number of AI-based algorithms has become available for many radiological use case scenarios, most published studies indicate that only very few of these tools are helpful for reducing radiologists’ workload, whereas the majority rather result in an increased or unchanged workload [2]. Furthermore, a recent analysis of the literature found that the available scientific evidence of the clinical efficacy of 100 commercially available CE-marked products was quite limited, leading to the conclusion that AI in radiology was still in its infancy [3]. The purpose of the present survey was to get an impression of the current practical clinical experience of radiologists from different European countries with AI-powered tools.
Methods
A survey was created by the members of the ESR eHealth and Informatics Subcommittee and was intentionally kept brief to allow responding in a few minutes. A few demographic questions covered the country, type of institution (i.e. academic department, regional hospital, or private practice), and the main field of radiological practice, as summarised in Tables 1, 2 and 3. For the more specific questions about the use of AI-based algorithms, it was clearly stated that the answers were intended to reflect experience from clinical routine rather than research and testing purposes. The questions related to the use of AI addressed the respondents’ working experience with certified AI-based algorithms, possible difficulties in integrating these algorithms into the IT system, and the different use case scenarios for which AI-based algorithms were used in clinical routine, mainly distinguishing tools aiming at facilitating the diagnostic interpretation process itself (questions shown in Fig. 1) from those aiming at facilitating the prioritisation of examinations in the workflow. Specific questions addressed the technical integration of the algorithms (question mentioned in Table 4); radiologists’ confidence in the diagnostic performance (question mentioned in Table 5); quality control mechanisms to evaluate diagnostic accuracy (questions mentioned in Tables 6, 7 and 8); communication of the use of diagnosis-related algorithms towards patients or in the radiology reports (questions mentioned in Tables 9 and 10); and the usefulness of algorithms for reducing the radiologists’ workload (questions mentioned in Tables 11 and 12). Respondents also had the opportunity to offer free-text remarks regarding their use of AI-based tools. Those respondents who did not use AI-based algorithms in clinical practice were asked to skip all the questions related to clinical AI use and to proceed directly to the last question about acquisition of AI-based algorithms, so that the opinions of all participating radiologists were taken into consideration for the final question about their intentions regarding acquisition of such tools (question mentioned in Fig. 2).
The survey was created through the ESR central office using the SurveyMonkey platform (SurveyMonkey Inc., San Mateo, CA, USA), and 27,700 radiologist members of the ESR were invited by e-mail in January 2022 to participate. The survey was closed after a second reminder in March 2022. The answers of the respondents were collected and analysed using Excel software (Microsoft, Redmond, WA, USA).
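The tabulations behind Tables 4, 5, 6, 7, 8, 9, 10, 11 and 12 are simple frequency counts with percentages. The individual survey answers are not public, so the sketch below rebuilds the counts of Table 5 from its published totals purely to illustrate the arithmetic; it is a minimal Python sketch, not the authors' actual Excel analysis.

```python
from collections import Counter

def tabulate(responses):
    """Count each answer and report (count, percentage of all responses)."""
    counts = Counter(responses)
    total = len(responses)
    return {answer: (n, round(100 * n / total, 1)) for answer, n in counts.items()}

# Illustrative reconstruction of Table 5 from its published totals:
# 140 "yes", 31 "no", 14 skipped, out of 185 respondents.
table5 = tabulate(["yes"] * 140 + ["no"] * 31 + ["skipped"] * 14)
print(table5["yes"])  # (140, 75.7), the 75.7% reported in Table 5

# Overall response rate: 690 respondents of the 27,700 invited ESR members.
print(round(100 * 690 / 27700, 1))  # 2.5
```

The same tally applied to Table 12's three-level answers (16, 69, and 26 of 111) reproduces its 14.4%, 62.2%, and 23.4% split.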
Results
A total of 690 ESR radiologist members from 44 countries responded to the survey, for a response rate of 2.5%. The distribution per country and the proportion of respondents with practical clinical experience with AI-based algorithms per country are given in Table 1.
The 276 respondents with practical clinical experience with AI-based algorithms were affiliated to 229 institutions in 32 countries; their answers formed the main basis of this study. Table 2 shows that 143 (52%) of the respondents with practical clinical experience with AI algorithms were affiliated to academic institutions, whereas 102 (37%) worked in regional hospitals and 31 (11%) in private practice.
Table 3 characterises the same group of respondents as in Table 2 regarding their main field of activity, showing that a wide range of subspecialties was represented in the survey and that abdominal radiology, neuroradiology, general radiology, and emergency radiology together accounted for half of the respondents. A detailed analysis of the results according to subspecialties was beyond the scope of the study because of the relatively small size of the resulting groups.
The experience regarding technical integration of the software algorithms into the IT system or workflow is summarised in Table 4, showing that only 17.8% of respondents reported difficulties with integration of these tools, whereas a majority of 44.5% observed no such difficulties, although 37.7% of respondents did not answer this question.
Algorithms were used in clinical practice either for assistance in interpretation or for prioritisation of workflow. An overview of the scenarios for which AI-powered algorithms were used by the respondents is given in Fig. 1.
Table 1 Distribution of all 690 respondents by country and proportion of radiologists with practical clinical experience with AI algorithms
Country | Number of respondents per country | Number of respondents with practical clinical experience with AI per country | Percentage of radiologists with practical clinical experience with AI per country (%)
Italy 71 23 32
Spain 64 19 30
UK 60 23 38
Germany 50 23 46
Netherlands 50 35 70
Sweden 29 14 48
Denmark 27 15 56
Turkey 27 3 11
Norway 26 12 46
Switzerland 27 14 54
France 25 12 48
Belgium 23 13 57
Austria 21 12 57
Greece 21 5 24
Portugal 17 5 29
Romania 16 4 25
Ukraine 13 3 23
Croatia 11 4 36
Russian Fed 11 4 36
Bulgaria 10 0 0
Poland 10 4 40
Finland 7 4 57
Hungary 7 3 43
Serbia 7 1 14
Slovenia 7 3 43
Slovakia 6 5 83
Ireland 5 2 40
Lithuania 5 2 40
Bos. & Herzegovina 4 0 0
Czech Republic 4 3 75
Israel 4 2 50
Latvia 4 0 0
Armenia 3 0 0
Albania 2 0 0
Azerbaijan 2 0 0
Belarus 2 0 0
Estonia 2 2 100
Georgia 2 0 0
Kazakhstan 2 0 0
Luxembourg 2 1 50
Cyprus 1 0 0
Iceland 1 0 0
Kosovo 1 1 100
Uzbekistan 1 0 0
Total 690 276
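The percentage column of Table 1 is simply the second count divided by the first, rounded to a whole percent; a quick sketch with three rows transcribed from the table shows the derivation:

```python
# Rows transcribed from Table 1: (country, respondents, of whom with
# practical clinical AI experience).
rows = [("Italy", 71, 23), ("Netherlands", 50, 35), ("Turkey", 27, 3)]

for country, total, with_ai in rows:
    pct = round(100 * with_ai / total)  # Table 1 rounds to a whole percent
    print(country, pct)  # Italy 32, Netherlands 70, Turkey 11
```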
Use of algorithms for assistance in diagnostic interpretation
Among the 276 respondents who shared their practical experience with AI-based tools, a total of 185 (67%) reported clinical experience with one or more integrated algorithms for routine diagnostic tasks. As seen in Fig. 1, there were different use case scenarios, the commonest being detection or marking of specific findings. The free-text remarks of the respondents showed a large range of pathologies in practically all clinical fields and with almost all imaging modalities. Typical examples of pathologies were pulmonary emboli and parenchymal nodules, cerebral haemorrhage and reduced cerebrovascular blood flow, or colonic polyps on CT. Other tasks included the detection of traumatic lesions, e.g. the presence of bone fractures on conventional radiographs, or the calculation of bone age. The second most common diagnostic scenario was assistance with post-processing (e.g. using AI-based tools for image reconstruction or quantitative evaluation of structural or functional abnormalities), followed by primary interpretation (i.e. potentially replacing the radiologist), assistance with differential
Table 2 Respondents with practical clinical experience with AI-based algorithms: distribution of origin by country and type of institution
Country | Number of respondents per country | Number of institutions per country | Respondents from academic departments | Respondents from private practice | Respondents from regional hospitals
Netherlands 35 20 16 0 19
Germany 23 21 14 3 6
Italy 23 21 13 0 10
UK 23 22 7 2 14
Spain 19 16 14 1 4
Denmark 15 7 11 1 3
Switzerland 14 13 6 6 2
Sweden 14 14 7 1 6
Belgium 13 9 5 1 7
Austria 12 11 7 1 4
France 12 11 5 5 2
Norway 12 9 6 0 6
Greece 5 5 2 2 1
Portugal 5 4 0 4 1
Slovakia 5 5 2 2 1
Croatia 4 4 1 1 2
Finland 4 3 3 0 1
Poland 4 3 3 0 1
Romania 4 2 2 0 2
Russian Fed 4 4 3 0 1
Czech Republic 3 3 1 0 2
Hungary 3 3 2 0 1
Slovenia 3 3 2 0 1
Turkey 3 3 3 0 0
Ukraine 3 2 2 1 0
Estonia 2 2 1 0 1
Ireland 2 2 1 0 1
Israel 2 2 2 0 0
Lithuania 2 2 0 0 2
Kosovo 1 1 1 0 0
Luxembourg 1 1 0 0 1
Serbia 1 1 1 0 0
Total 276 229 143 (52%) 31 (11%) 102 (37%)
diagnosis, e.g. by facilitation of literature search, and
quality control.
Although a detailed analysis of all the different diagnostic use case scenarios was beyond the scope of this survey, the respondents’ answers to specific survey questions are shown in Tables 5, 6, 7, 8, 9, 10 and 11. Because some respondents skipped or incompletely answered some questions, the number of yes/no answers per question was not complete. As shown in Table 5, most respondents (75.7%) found the results provided by the algorithms generally reliable.
A significant number of respondents declared that they used mechanisms of quality assurance regarding the diagnostic performance of the algorithms. These included keeping records of diagnostic discrepancies between the radiologist and the algorithms (44.4%), establishing receiver operating characteristic (ROC) curves of diagnostic accuracy based on the radiologist’s diagnosis (34.1%), and/or ROC curves based on the final medical record (30.3%) (Tables 6, 7 and 8).
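A record of radiologist-algorithm discrepancies (kept by 44.4% of users, Table 6) contains exactly what the ROC-based monitoring of Tables 7 and 8 needs: each case yields a (reference, algorithm) pair from which sensitivity and specificity, i.e. one operating point on the ROC curve, can be computed. A minimal sketch with an invented discrepancy log, purely for illustration:

```python
# Invented discrepancy log: (reference diagnosis, algorithm output),
# True = finding present. The reference may be the radiologist's read
# (Table 7) or the final diagnosis in the medical record (Table 8).
log = [(True, True), (True, False), (False, False),
       (False, True), (True, True), (False, False)]

tp = sum(1 for ref, alg in log if ref and alg)          # true positives
fn = sum(1 for ref, alg in log if ref and not alg)      # false negatives
tn = sum(1 for ref, alg in log if not ref and not alg)  # true negatives
fp = sum(1 for ref, alg in log if not ref and alg)      # false positives

sensitivity = tp / (tp + fn)  # fraction of true findings the algorithm caught
specificity = tn / (tn + fp)  # fraction of negatives correctly left unflagged
```

Sweeping the algorithm's decision threshold and recomputing this pair at each setting would trace out the full ROC curve referred to in Tables 7 and 8.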
The use of a diagnostic algorithm was disclosed to patients by 17.3% of the respondents but mentioned in the report by 34.6% (Tables 9 and 10).
Only a minority of 22.7% of respondents who used AI-based algorithms for diagnostic purposes experienced a reduction of their workload, whereas 69.8% reported
Table 3 Respondents with practical clinical experience with AI-based algorithms: main field of activity/subspecialty
Field of practice | Number of respondents | (%)
Abdominal radiology 45 16.3
Neuroradiology 45 16.3
General radiology 39 14.1
Chest radiology 32 11.6
Cardiovascular radiology 24 8.7
Musculoskeletal radiology 23 8.3
Oncologic imaging 23 8.3
Breast radiology 17 6.2
Emergency radiology 10 3.6
Paediatric radiology 8 2.9
Urogenital radiology 6 2.2
Head and Neck radiology 4 1.5
Total 276 100
Fig. 1 Which type of scenario (use case) was addressed by the AI algorithm(s) used in clinical routine? The answers of all 276 respondents with practical clinical AI experience are shown, including the number of respondents using one or more algorithms for assistance in diagnostic interpretation and/or workflow prioritisation:
• Assistance during interpretation (e.g., detecting/marking of specific findings like nodules, emboli etc.): 142 (51.5%)
• Assistance for post-processing (e.g., image reconstruction, quantitative evaluation): 79 (28.6%)
• Primary interpretation (= replacing the radiologist): 19 (7%)
• Assistance during interpretation (e.g., access to literature, facilitating differential diagnosis etc.): 14 (5%)
• Quality control: 11 (4%)
• Workflow prioritisation: 111 (40%)
Table 4 Respondents with practical clinical experience with AI-based algorithms: Have there been any major problems with integration of AI-based algorithms into your IT system/workflow?
Answer Number of respondents (%)
Yes 49 17.8
No 123 44.5
Skipped 104 37.7
Total 276 100
Table 5 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Were the findings of the algorithm(s) considered to be reliable?
Answer Number of respondents (%)
Yes 140 75.7
No 31 16.8
Skipped 14 7.5
Total 185 100
that there was no reduction effect on their workload
(Table 11).
Use of algorithms for prioritisation of workflow
Among the 276 respondents who had practical experience with AI-based tools, there were 111 respondents (40%) reporting experience with algorithms for prioritisation of image sets in their clinical workflow. As shown in Table 12, the prioritisation algorithms were considered to be very helpful for reducing the workload of the medical staff by 23.4% of the respondents who used them, whereas the other users found them only moderately helpful (62.2%) or not helpful at all (14.4%).
Intentions of all respondents regarding the acquisition of an AI-based algorithm
All participants of the survey, regardless of their practical clinical experience, were given the opportunity to answer the question whether they intended to acquire a certified AI-based software. Of the 690 participants, 92 (13.3%) answered “yes”, 363 (52.6%) answered “no”, and 235 (34.1%) did not answer this question. Figure 2
Table 6 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Were discrepancies between the software and the radiologist recorded?
Answer Number of respondents (%)
Yes 82 44.4
No 89 48.1
Skipped 14 7.5
Total 185 100
Table 7 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Was the diagnostic accuracy (ROC curves) supervised on a regular basis in comparison with the radiologist’s diagnosis?
Answer Number of respondents (%)
Yes 63 34.1
No 108 58.4
Skipped 14 7.5
Total 185 100
Table 8 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Was the diagnostic accuracy (ROC curves) supervised on a regular basis in comparison with the final diagnosis in the medical record?
Answer Number of respondents (%)
Yes 56 30.3
No 115 62.2
Skipped 14 7.5
Total 185 100
Table 9 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Were patients informed that an AI software was used to reach the diagnosis?
Answer Number of respondents (%)
Yes 32 17.3
No 139 75.2
Skipped 14 7.5
Total 185 100
Table 10 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Was the use of an AI software to reach the diagnosis mentioned in the report?
Answer Number of respondents (%)
Yes 64 34.6
No 107 57.9
Skipped 14 7.5
Total 185 100
Table 11 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Has (have) the algorithm(s) used for diagnostic assistance proven to be helpful in reducing the workload for the medical staff?
Answer Number of respondents (%)
Yes 42 22.7
No 129 69.8
Skipped 14 7.5
Total 185 100
Table 12 Experience of 111 respondents with AI-based algorithms for clinical workflow prioritisation: Has the algorithm proven to be helpful in reducing the workload for the medical staff?
Answer Number of respondents (%)
Not at all helpful 16 14.4
Moderately helpful 69 62.2
Very helpful 26 23.4
Total 111 100
summarises the reasons given by participants who did
not intend to acquire AI-based algorithms for their
clinical use.
Discussion

While the previous survey on AI [1] was based on the expectations of the ESR members regarding the impact of AI on radiology, the present survey intended to obtain an overview of current practical clinical experience with AI-based algorithms. Although the respondents with practical clinical experience in this survey represent only 1% of the ESR membership, their proportion among all respondents varied greatly among countries. The geographical distribution of the 276 radiologists who shared their experience with such tools in clinical practice shows that the majority was affiliated to institutions in Western and Central Europe or in Scandinavia. Half of all respondents with practical clinical experience with AI tools was affiliated to academic institutions, whereas the other half practiced radiology in regional hospitals or in private services. Since it is likely that the respondents in this survey were radiologists with a special interest in AI-based algorithms, it cannot be assumed that this survey reflects the true proportion of radiologists in the European region with practical clinical experience with AI-based tools.
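Several of the survey items above frame diagnostic accuracy in terms of ROC curves. As background, the area under the ROC curve (AUC) equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, which permits a very compact computation. A minimal, self-contained Python sketch with hypothetical labels and scores (not data from either article):

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison: the fraction of
    (positive, negative) pairs in which the positive case scores higher
    (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = finding present, scores = algorithm confidence.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc(labels, scores))  # prints 0.75
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking; monitoring this value against the radiologist's diagnosis or the final diagnosis in the medical record is what Tables 7 and 8 ask about.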
  • 1. 1 Deliverable 3 - Evaluate Research and Data Attempt 2 Jamie Raines Rasmussen College HSA5000CBE Section 01CBE Scholarly Research and Writing Caroline Gulbrandsen 9/1/2022 2 Research Question Evaluation The Credibility of the Data The research question integrated into this study is related to how artificial intelligence (AI) integration in clinical radiology has the potential to disrupt the industry. According to Becker et al. (2022), AI is strongly connected to the operations performed in clinical radiology for better test
  • 2. results. The technology allows machines to achieve human-level performance while detecting tumors during radiology tests. There are suitable improvements performed in the AI industry using research that structures technical operations by machines to validate the Integration of AI algorithms for patient care. The credibility of the data in the article is reliable since the authors researched how healthcare professionals who have used artificial intelligence have promoted better health management. The European Society of Radiology has accredited all the authors due to their degrees and advanced educational levels (Becker et al., 2022). In the other article, the authors also integrate credibility since workers have experience in hospitals and are at a university level of education (Mulryan et al., 2022). Most research participants agreed that AI integration in radiology information technology (IT) departments has promoted accuracy and reduced excess time for setting up systems. The next article focused on the use of AI for medical imaging, whereby it is clear that the
  • 3. demand for AI is constantly progressing. According to Mulryan et al. (2022), the advancements in AI have been adverse, causing some radiologists to develop a negative attitude towards the industry that could potentially eliminate human jobs. AI has been found to simulate human brain capacity, which does not get received well by professionals in healthcare settings. This indicates more operations can get performed to validate AI operations since they are needed for system management. The article's data was collected from journalists, radiologists, commercial Robert Neuteboom hold advanced degrees in a relevant field - is this what you mean? Robert Neuteboom Robert Neuteboom presented as credible Robert Neuteboom Robert Neuteboom Robert Neuteboom As I mentioned in my comment on your previous submission, you need to use these terms separately in your evaluation. Saying that credibility is reliable does not describe either of
  • 4. these terms nor does it offer examples. Your claim about conducting research of existing studies would fall under credibility. You might also talk about the authors' credentials, employment, and affiliations -- anything that demonstrates trustworthiness. Reliability, conversely, has to do with consistency across items, time, and researchers, inter-rater alignment, and replicability. Use these terms accurately and provide specific examples for both. Robert Neuteboom Robert Neuteboom 3 representatives, researchers, and non-radiologist doctors, and all provided different opinions on the impact of AI in medical imaging. Other health experts also provided their knowledge as long as they received advanced education for their professions. Documentation of the Data All the articles integrate a high-quality data documentation process since there are different topics, use of graphics, and graphs that explain how the research was conducted. There was a direct process to determine that the articles were quantitative since there was a comparison among different variables. In the article by Becker et al. (2022), the
data captured how respondents rated the value of AI in clinical radiology. The data was then recorded in tables under each question, allowing an accurate analysis of how many responses supported or opposed the concepts raised in the research question. In the same article, one example of accurate documentation is a graph showing why clinical radiologists had not acquired certified AI-algorithm expertise. In the second article, Mulryan et al. (2022) offered accurate data representations using schematics that showed the different participant reactions and the number of participants in each category, applying a logistic regression model to assess the data's validity. The quantitative method was applied soundly: the authors analyzed how participants were recruited into the research and how their answers were distributed. The study was direct, requiring participants only to answer questions during survey sessions while providing their data for easy identification. Collecting data from different radiologists also promoted the study's reliability, since the data can be assessed by any professional and used to produce a health management standard.

Evaluation of Data Analysis and Interpretation
The data collected from the articles support the hypothesis developed for this study: conforming to AI practices in radiology is imperative because those practices inevitably affect healthcare delivery. AI appears set to become more advanced, especially in radiology, where it offers a structure for changing how system operations are handled. Superintelligent operations may eventually become possible, given AI's ability to mimic the behavior of human clinical radiologists. Becker et al. (2022) drew on expert knowledge of advances in AI operations, correlating ideas to generate better data analysis. AI in clinical radiology is tied to changes across the healthcare environment because of its capacity to manage system operations, and it therefore offers a pathway to better machine operations for improving patient care. Human acceptance of AI must be managed if intelligent systems are to improve safely in future healthcare (Mulryan et al., 2022). These authors adopted a document analysis process, seeking credible sources on how radiology operations are performed.

Possible Ethical Issues

In conclusion, it is possible to correlate real-life positive healthcare outcomes with data provided by people familiar with the area of interest. Improving AI operations requires certain conditions, including algorithm management and logic handling, which are imperative for supporting the expertise of clinical radiologists as they learn to perform AI operations. In conducting any personal study, it can be difficult to gather factual data without plagiarizing the original information. Another issue is obtaining informed consent from professionals in the radiology industry: finished work is easy to find online, yet contacting its developers and securing permission to use it in research can be challenging. Confidentiality is another requirement that must be analyzed in order to protect the original owners of any piece of work without appearing to steal data. All of these potential issues can be addressed by conducting thorough research and carefully assessing the topic.

References

Becker, C., Kotter, E., Fournier, L., & Martí-Bonmatí, L. (2022). Current practical experience with artificial intelligence in clinical radiology: A survey of the European Society of Radiology. Insights Into Imaging, 13(1). doi: 10.1186/s13244-022-01247-y

Mulryan, P., Ni Chleirigh, N., O’Mahony, A., Crowley, C., Ryan, D., McLaughlin, P., et al. (2022). An evaluation of information online on artificial intelligence in medical imaging. Insights Into Imaging, 13(1). doi: 10.1186/s13244-022-01209-4
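The logistic regression analysis attributed to Mulryan et al. above can be illustrated with a small sketch. This is not the authors' actual model or data: the predictor (prior exposure to AI) and the synthetic yes/no responses below are invented purely to show how such a model relates a respondent characteristic to the odds of expressing a positive viewpoint.

```python
# Minimal logistic regression by batch gradient descent, on invented
# survey-style data (x = 1 if the respondent had AI exposure, y = 1 if
# they expressed a positive viewpoint). Illustrative only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Fit P(y=1|x) = sigmoid(w*x + b) by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # prediction error for this respondent
            grad_w += err * x
            grad_b += err
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# Hypothetical respondents (not from either study):
xs = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
ys = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]

w, b = fit_logistic(xs, ys)
p_exposed = sigmoid(w * 1 + b)    # modeled P(positive | AI exposure)
p_unexposed = sigmoid(w * 0 + b)  # modeled P(positive | no exposure)
print(round(p_exposed, 2), round(p_unexposed, 2))
```

With a single binary predictor, the fitted probabilities converge toward the empirical proportion of positive responses in each group, which is why logistic regression is a natural choice for survey data of this kind.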
Pharmacy Sterile and Nonsterile Compounding: Sterile Compounding Procedures

Question 1 (5 points)
Which chapters of the USP are applicable to nutritional supplements?
Question 1 options:
A) Chapters 2000 and later
B) Chapters 50 through 250
C) Chapters 1000 and later
D) Chapters 1500 through 1999

Question 2 (5 points)
When administering a TPN, the bag should be discarded how often?
Question 2 options:
A) After it finishes running
B) Within 12 hours
C) As ordered by the physician
D) Within 24 hours

Question 3 (5 points)
In which locations should sterile supplies be removed from their outer wrappings?
Question 3 options:
A) In the ante area
B) At the edge of the hood
C) In the buffer area
D) Six inches inside the hood

Question 4 (5 points)
Which of the following is responsible for creating the compounding record (CR) for each CSP?
Question 4 options:
A) Physician
B) Pharmacy Technician
C) Pharmacist
D) Compounder

Question 5 (5 points)
Which of the following microbial growth mediums would be used to sample for fungi bacterial growth in the sterile environment?
Question 5 options:
A) Bacteriological agar
B) Trypticase soy agar
C) Tryptone glucose extract agar
D) MacConkey agar

Question 6 (5 points)
When a nonsterile product is used in the preparation of a Category 2 CSP
Question 6 options:
A) the assigned BUD is 10 days when refrigerated.
B) the assigned BUD is 4 days at room temperature.
C) endotoxin testing must be performed.
D) sterility testing must be performed.

Question 7 (5 points)
500 mL of 20% Liposyn provides
Question 7 options:
A) 1000 calories per day.
B) 2000 calories per day.
C) 1500 calories per day.
D) 500 calories per day.

Question 8 (5 points)
This critical site must always be exposed to first air and swabbed with a sterile IPA pad.
Question 8 options:
A) Syringe tip
B) Bag port
C) Needle hub
D) Syringe plunger

Question 9 (5 points)
A ___ should be placed on a CSP to ensure it hasn't been compromised during transport.
Question 9 options:
A) sterile cap
B) IV port seal
C) IV port cover
D) tamper-proof seal

Question 10 (5 points)
The temperature of the refrigerator used to store CSP must be checked ___ to ensure the stability of the CSP.
Question 10 options:
A) monthly
B) daily
C) hourly
D) weekly

Question 11 (5 points)
Growth media used for surface sampling must be inverted and placed in an incubator for ___ at 20° to 25° C followed by an additional 2 to 3 days at 30° to 35° C.
Question 11 options:
A) 5 days
B) 7 days
C) 14 days
D) 3 days

Question 12 (5 points)
Calculations a technician performs for the CSP should be located on the
Question 12 options:
A) master formulation record (MFR).
B) quality assurance (QA).
C) compounding record (CR).
D) standard operating procedure (SOP).

Question 13 (5 points)
In an ISO Class 8 environment, the air contains no more than ___ particles of 0.5 microns per cubic meter.
Question 13 options:
A) 100,000
B) 150,000
C) 200,000
D) 125,000

Question 14 (5 points)
In which environment should documentation of CSPs be performed?
Question 14 options:
A) ISO Class 5
B) ISO Class 9
C) ISO Class 7
D) ISO Class 8

Question 15 (5 points)
Surface sampling of the interior of a PEC in a SCA is required ___
Question 15 options:
A) quarterly.
B) annually.
C) every six months.
D) monthly.

Question 16 (5 points)
Certification of an ISO Class 5, 7, or 8 compounding environment includes
Question 16 options:
A) airflow testing, HEPA filter integrity testing, total particle counts, and smoke studies.
B) HEPA filter integrity testing, total particle counts, and smoke studies.
C) airflow testing, HEPA filter integrity testing, technician inspection, and smoke studies.
D) evacuation testing, HEPA filter integrity testing, total particle counts, and technician inspection.

Question 17 (5 points)
Before withdrawing fluid from a vial using a standard non-vented needle, which of the following should be done to prevent creating a vacuum in the vial?
Question 17 options:
A) Inject a volume of air that's half of the amount of liquid withdrawn.
B) Nothing needs to be done beforehand; the fluid can be withdrawn without adding air.
C) Inject a volume of air equal to the amount of liquid being withdrawn.
D) Inject a volume of air greater than the amount of liquid being withdrawn.

Question 18 (5 points)
A technician is staging the final IV preparation. This is done for the pharmacist to _____ the admixture.
Question 18 options:
A) deliver
B) administer
C) dispose of
D) check

Question 19 (5 points)
___ is an example of a fat source used in parenteral solutions.
Question 19 options:
A) Normal saline
B) Liposyn
C) Dextrose
D) Aminosyn

Question 20 (5 points)
Visual inspection of the CSP should be performed ___ the final pharmacist check.
Question 20 options:
A) during
B) before
C) after
D) while

Deliverable 3 - Evaluate Research and Data
Jamie Raines
Attempt 1
Rasmussen College
HSA5000CBE Section 01CBE Scholarly Research and Writing
Caroline Gulbrandsen
8/26/2022

Research Question Evaluation

The Credibility of the Data

The research question integrated into this study is related to how artificial intelligence (AI) integration in clinical radiology has the potential to disrupt the industry. According to Becker et al. (2022), AI is strongly connected to the operations performed in clinical radiology for better test results. The technology allows machines to achieve human-level performance in detecting tumors during radiology tests. There are suitable improvements performed in the AI industry using research that structures technical operations by machines to validate the integration of AI algorithms for patient care. The data in the article are credible because the authors researched how healthcare professionals who have used artificial intelligence have promoted better health management. Most research participants agreed that AI integration in radiology information technology (IT) departments has promoted accuracy and reduced the excess time needed to set up systems.

The next article focused on the use of AI for medical imaging, whereby it is clear that the demand for AI is constantly progressing. According to Mulryan et al. (2022), the advancements in AI have been adverse, causing some radiologists to develop a negative attitude towards an industry that could potentially eliminate human jobs. AI has been found to simulate human brain capacity, which is not well received by professionals in healthcare settings. This suggests that further validation of AI systems is needed before they are entrusted with system management. The article's data was collected from journalists, radiologists, commercial representatives, researchers, and non-radiologist doctors, all of whom provided different opinions on the impact of AI in medical imaging.

Documentation of the Data

Both articles document their data to a high standard, using distinct topics, graphics, and graphs that explain how the research was conducted. The articles are readily identified as quantitative because there was a comparison among
different variables. In the article by Becker et al. (2022), the data captured how different respondents reacted to the value of AI in clinical radiology. The data was then recorded in tables under different questions so that there would be an accurate analysis outcome, showing the number of responses that supported or did not support concepts mentioned in the research question. In the same article, an example of accurate documentation is a graph that indicated data from clinical radiologists on why they did not acquire certified AI-algorithm expertise. In the second article, Mulryan et al. (2022) offered accurate data representations using schematics that integrated different reactions and the number of participants, applying a logistic regression model to determine the data's validity. Key terms visible on opening the document made it clear that a quantitative research method was applied, indicating the number of participants
used in the research and how their answers were distributed. The study was direct and only required participants to answer questions during survey sessions while providing their data for easy identification. Collecting data from different radiologists was applied to promote the study's validity.

Evaluation of Data Analysis and Interpretation

The data collected from the articles support a hypothesis developed for the study: it is imperative to conform to AI practices in radiology since they inevitably affect healthcare delivery. The future of AI appears to be getting more advanced, especially for the radiology industry, which integrates a structure for change in handling system operations. Superintelligent operations may become possible given AI's ability to mimic the behavior of human clinical radiologists. The authors introduced the level of expertise in handling advancements in AI operations based on the capability to correlate ideas for generating better data analysis. The existence of AI in clinical radiology is connected to changes in different aspects of the healthcare environment due to its capability to manage system operations. There is thus a structure for change in terms of better machine operations for patient healthcare improvement. Management of human beings' acceptance of AI is required for the best procedure of offering intelligence improvement for a safe healthcare future.

Possible Ethical Issues
In conclusion, it would be possible to correlate real-life positive healthcare outcomes with the data provided by persons familiar with the area of interest. There are conditions required to improve AI operations, including algorithm management and logic handling, all of which are imperative to support healthcare expertise by clinical radiologists as they learn to perform AI operations. All of these are appropriate techniques for promoting accurate system handling as a structure for AI integration into clinical radiology and medical imaging. The digitization of healthcare imaging affects existing automation operations that seek to engage people with technical expertise in processes structured to integrate better machine learning, as set up using clinical radiology expertise. The authors were thus affected by the influx of technology into healthcare that may seem to undermine the expertise of healthcare providers. It is therefore critical to operate in the current advanced IT environment using the training of clinical radiologists rather than solely trusting AI. Applying social constructs and views from clinical radiologists is thus imperative to constantly manage AI operations related to accurate system improvements.

References

Becker, C., Kotter, E., Fournier, L., & Martí-Bonmatí, L. (2022). Current practical experience with artificial intelligence in clinical radiology: A survey of the European Society of Radiology. Insights Into Imaging, 13(1). doi: 10.1186/s13244-022-01247-y

Mulryan, P., Ni Chleirigh, N., O’Mahony, A., Crowley, C., Ryan, D., McLaughlin, P., et al. (2022). An evaluation of information online on artificial intelligence in medical imaging. Insights Into Imaging, 13(1). doi: 10.1186/s13244-022-01209-4
Mulryan et al. Insights into Imaging (2022) 13:79
https://doi.org/10.1186/s13244-022-01209-4

ORIGINAL ARTICLE

An evaluation of information online on artificial intelligence in medical imaging

Philip Mulryan, Naomi Ni Chleirigh, Alexander T. O’Mahony*, Claire Crowley, David Ryan, Patrick McLaughlin, Mark McEntee, Michael Maher and Owen J. O’Connor

Abstract
Background: Opinions seem somewhat divided when considering the effect of artificial intelligence (AI) on medical imaging. The aim of this study was to characterise viewpoints presented online relating to the impact of AI on the field of radiology and to assess who is engaging in this discourse.
Methods: Two search methods were used to identify online information relating to AI and radiology. Firstly, 34 terms were searched using Google and the first two pages of results for each term were evaluated. Secondly, a Rich Search Site (RSS) feed evaluated incidental information over 3 weeks. Webpages were evaluated and categorized as having a positive, negative, balanced, or neutral viewpoint based on study criteria.
Results: Of the 680 webpages identified using the Google search engine, 248 were deemed relevant and accessible. 43.2% had a positive viewpoint, 38.3% a balanced viewpoint, 15.3% a neutral viewpoint, and 3.2% a negative viewpoint. Peer-reviewed journals represented the most common webpage source (48%), followed by media (29%), commercial sources (12%), and educational sources (8%). Commercial webpages had the highest proportion of positive viewpoints (66%). Radiologists were identified as the most common author group (38.9%). The RSS feed identified 177 posts of which were relevant and accessible. 86% of posts were of media origin expressing positive viewpoints (64%).
Conclusion: The overall opinion of the impact of AI on radiology presented online is a positive one. Consistency across a range of sources and author groups exists. Radiologists were significant contributors to this online discussion and the results may impact future recruitment.
Keywords: Artificial intelligence in radiology, Perspectives on evolution of radiology, Future impact on the radiologist, Radiology recruitment, Radiology efficiency

© The Author(s) 2022. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain
permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Key points
• Consensus? An overall positive opinion exists online towards AI on the future of radiology.
• Radiologists? A high proportion of radiologists believe there will be a positive impact.

*Correspondence: [email protected]. University College Cork, Cork, Ireland. Full list of author information is available at the end of the article.

Background
Artificial intelligence (AI) involves the use of computer algorithms to perform tasks typically associated with human intelligence [1]. The role of AI in medical imaging has progressed to various stages of development, application and refinement over the past 10–15 years. Consequentially, publications on AI in medical imaging have exponentially increased from about 100–150 per year in 2007–2008 to 700–800 per year in 2016–2017 [2]. Several studies pertaining to dermatology, pathology, and ophthalmology have shown the potential and clinical utility of AI algorithms. For example, skin cancer, the most diagnosed malignancy worldwide, is primarily diagnosed visually. Deep neural networks (DNN) have demonstrated equivalence with consultant dermatologist diagnostic ability [3]. Hence the early evolution of AI has leaned towards the visual sciences, and its application to radiology is an extension of this.

Medical imaging interpretation requires accuracy, precision, and fidelity. At its essence it is a visual science whereby the interpreter translates either a single or series of images into a succinct report to answer a clinical question and guide evidence-based management. Studies report that on average a radiologist must interpret one image every 3–4 s in an 8-h workday to meet workload demands [4], and with the compound annual growth rate (CAGR) of diagnostic imaging estimated to be 5.4% until 2027 [5], increasing workloads are expected. Burnout has been ubiquitously reported among medical specialties (~ 25–60%) [6, 7] with limited solutions being proposed and implemented; thus many key advantages may be conferred by the incorporation of AI into radiological practice. The applications of AI in radiology can be broadly divided into diagnostic and logistic. Computer-aided diagnostics (CAD) may facilitate earlier detection of abnormalities, improve patient outcomes, reduce medico-legal exposure, and decrease radiologist workload.
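The workload figures quoted above (one image every 3–4 s in an 8-h workday, with imaging volume growing at a 5.4% CAGR) can be checked with simple arithmetic. The sketch below is a back-of-the-envelope illustration; the 2022–2027 projection window is an assumption made here, not a figure taken from the paper.

```python
# Back-of-the-envelope check of the radiology workload figures cited above.
SECONDS_PER_WORKDAY = 8 * 60 * 60        # an 8-hour workday in seconds

# "One image every 3-4 seconds" implies this daily interpretation volume:
images_high = SECONDS_PER_WORKDAY // 3   # one image every 3 s
images_low = SECONDS_PER_WORKDAY // 4    # one image every 4 s

# Diagnostic imaging CAGR of 5.4% [5]; a five-year horizon (2022-2027)
# is assumed here purely for illustration.
CAGR = 0.054
growth = (1 + CAGR) ** 5

print(f"{images_low}-{images_high} images per day today")       # 7200-9600
print(f"~{round(images_low * growth)} images per day in 5 years (low end)")
```

Even at the low end, compounding a 5.4% annual increase over five years pushes the implied daily volume up by roughly 30%, which is the kind of growth motivating the diagnostic and logistic AI applications discussed in this section.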
Logistic improvements would include optimization of workflow, prompt communication of critical findings and more efficient vetting and triage systems. Historically, apprehension has existed concerning recruitment within the medical and radiological community as a result of AI. Focused assessment of individual stakeholder groups in relation to AI in radiology demonstrated a wide spectrum of opinion. Studies of medical student perspectives in North America, Europe and the United Kingdom conveyed heterogeneous opinions on the potential implications of AI on radiology, possibly with geographical variation [8–10]. A recurrent theme in early studies is the large educational gap in medical schools regarding the capability, utility, and limitations of AI. A European multi-centre study of both radiologists in training and consultants performed in France [11] demonstrated an overall positive perspective; however, a majority expressed concerns regarding insufficient information on AI and its potential implications. Ten years ago, the end of radiology as a career was being heralded, and radiology residency applications declined in response to concerns about the future of the specialty [12]. That perception probably reflected local concerns in the absence of experience. It has been shown that more positive opinions have been expressed by those medical students with exposure to AI and radiology [9]. The transition from discourse about the potential of AI to its integration and use should have modified opinions based on practice and experience. Therefore, this paper aimed to quantify the proportion of positive, negative, balanced, and neutral viewpoints presented on the internet in relation to the impact of AI on radiology. The purpose of this was to determine the global and regional perception of AI in radiology, and
thus to conclude where the future of radiology may lie.

Methods

Data collection
Two search methods were used to evaluate information online relating to artificial intelligence in medical imaging. The first search method screened existing data on AI in radiology at the time of search. The second method identified a live stream of articles relating to AI as they were released on the web. Searches were carried out independently by two of the investigators. Thirty-four key search phrases were established (Additional file 1: Appendix 1). Phrases were generated with input from a medical student, a healthcare professional, a non-consultant hospital doctor and a prospective radiology trainee. These phrases were then validated by two consultant radiologists (M.M.M., O.J.O'C). The phrases were chosen to reflect a broad range of search terms encompassing a multidisciplinary opinion on the impact of AI on the radiology service.

Data identification

Existing data
This search was performed on 'All' content in the Google search engine and was conducted over the period 25th January 2021–7th February 2021. The Google search engine has over 90% of the market share and thus was felt to be reasonably representative of the population on a global scale [13]. The Google search was performed for the 34 key phrases in an identical manner. Results were limited to the English language and to open-access academia, or to articles where no financial stipulation was required for access. We reviewed the first two pages of Google results for these searches, as numerous studies on
user behaviour have indicated that 95% of users choose websites listed on the first page of results, leaving only 5% reviewing results on any subsequent page [14–18]. While date of publication was not a selection criterion, all included articles from the Google search were created within the past 5 years.

Live stream
A Rich Site Summary (RSS) feed search strategy was used to evaluate the written incident information over a 3-week period (07/03/21–28/03/21) as a surrogate for postings on news media and social media. The same 34 key phrases were entered into Google Alerts. This provided a continuous search for new relevant online content appearing subsequently. This content was then analyzed and organized appropriately.

Data sourcing
The source of each relevant post was identified. The source website was then assigned a sub-type based on the 'About Us' section. The source subtypes were segregated as either journal, media, commercial, education, or other if outside of these categories. For published academia, it was noted whether it was from a peer-reviewed and/or indexed journal (PUBMED). Where an identifiable author existed, it was subtyped into radiologist, journalist, non-radiologist doctor, radiographer and other. The geographical origin and date of issue were also noted, where available.
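The live-stream arm described above relies on Google Alerts delivering new matches as feed entries. As a rough illustration only (not the authors' actual tooling), the sketch below parses a small Atom-style feed with Python's standard library and keeps entries whose titles mention one of the key phrases; the feed snippet, URLs and two-phrase list are hypothetical stand-ins for the study's 34 search phrases.

```python
import xml.etree.ElementTree as ET

# Hypothetical Atom feed snippet, standing in for a Google Alerts result feed.
FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Google Alerts - artificial intelligence radiology</title>
  <entry>
    <title>AI in radiology: threat or opportunity?</title>
    <link href="https://example.com/ai-radiology"/>
    <updated>2021-03-08T09:00:00Z</updated>
  </entry>
  <entry>
    <title>Hospital opens new imaging wing</title>
    <link href="https://example.com/imaging-wing"/>
    <updated>2021-03-09T10:30:00Z</updated>
  </entry>
</feed>"""

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace prefix for find/findall

# Two illustrative stand-ins for the 34 key phrases (Additional file 1: Appendix 1).
KEY_PHRASES = ["artificial intelligence radiology", "ai in radiology"]

def relevant_entries(feed_xml, phrases):
    """Return (title, url) pairs for entries whose title mentions any key phrase."""
    root = ET.fromstring(feed_xml)
    hits = []
    for entry in root.findall(f"{ATOM}entry"):
        title = entry.findtext(f"{ATOM}title", default="")
        link = entry.find(f"{ATOM}link")
        url = link.get("href") if link is not None else ""
        if any(p in title.lower() for p in phrases):
            hits.append((title, url))
    return hits

print(relevant_entries(FEED, KEY_PHRASES))
# → [('AI in radiology: threat or opportunity?', 'https://example.com/ai-radiology')]
```

In the study itself this filtering and the subsequent relevance/viewpoint categorization were done manually by the investigators; a keyword filter like this would only approximate the first screening step.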
Data categorization
The web pages identified by the dichotomized search strategy were analyzed by each investigator homogeneously. Firstly, all Google advertisements were omitted. Each post was then categorized as either relevant or non-relevant. Non-relevant posts included those failing to provide information on AI in medical imaging (such as a journal calling for abstracts/submissions), academia-related posts that were not open access, duplicate posts, and posts that were inaccessible. Relevant posts were divided as having an overall positive, negative, balanced, or neutral viewpoint. The assessment and categorization of this information was carried out by two senior authors (M.M.M., O.J.O'C), both of whom are academic consultant radiologists working in a large teaching hospital. The assessment was done in tandem, and the final decision was arrived at by consensus.

Relevance

Positive
Positive viewpoints were themed as changes brought about by AI which would result in increased employment, service expansion, efficiency, fidelity of interpretation, improved patient care, better quality assurance and more job satisfaction. Additional file 1: Appendix 2 provides a sample of positive viewpoints as extracted from the data of included posts. Webpages that contained predominantly positive information and concluded with an overriding positive viewpoint were categorized as 'Positive'.

Negative
Negative viewpoints were those that displayed a contrary theme to the positive viewpoint (see Additional file 1:
Appendix 2).

Balanced and neutral
Webpages categorized as 'Balanced' listed comparable amounts of positive and negative points without giving an overall positive or negative viewpoint. Webpages categorized as 'Neutral' objectively presented information relating to artificial intelligence and radiology but did not discuss how this would impact, be it negatively or positively, on the field of radiology. The fundamental difference between the 'Balanced' and 'Neutral' categories is that balanced webpages explicitly discussed how aspects of artificial intelligence would impact the field of radiology while neutral webpages did not (see Additional file 1: Appendix 2).

Data analysis
Data compilation and statistical analyses were performed using Microsoft Excel (Microsoft Corporation, Redmond, Washington, USA) and Google Sheets (Google LLC, Mountain View, California, USA). Descriptive statistics were used to summarize data. Frequency analyses were performed for categorical variables.

Results
A total of 680 Google pages relating to AI in medical imaging were identified. Of these, 561 pages were deemed relevant and accessible. Duplicate pages were removed, leaving 248 pages for evaluation. Forty-three percent (n = 106) of these pages expressed the overall view that AI would have a positive impact on the radiologist and the radiology department; 3.2% (n = 8) presented an overall negative viewpoint; 38.2% (n = 95) presented a balanced viewpoint and 15.3%
(n = 38) presented a neutral viewpoint (see Fig. 1). Forty-eight percent (n = 120) of the relevant pages were from open-access peer-reviewed journals; 30.2% (n = 75) were from media sources; 12.9% (n = 32) from commercial websites and 8.5% (n = 21) from educational sources. Table 1 summarises the allocated categories of origin and the viewpoint conveyed. The type of media source, along with the details of specific commercial companies, can be seen in Additional file 1: Appendix 3.1 & 3.2. Commercial web pages had the highest proportion of positive viewpoints, i.e. 66%, followed by media web pages at 52%, peer-reviewed journals at 37% and educational web pages at 14%. On the other hand, media web pages had the greatest proportion of negative viewpoints at 5%, followed by peer-reviewed journals at 3%. Negative viewpoints were not identified among commercial, educational, or other sources. Peer-reviewed journals had the greatest proportion of balanced viewpoints at 48%, while educational web pages had the greatest proportion of neutral viewpoints at 43%. An identifiable named author was displayed on 93% (n = 230) of web pages, with radiologists responsible for 38.7% (n = 89); journalists represented 20% of authors (n = 46); doctors working in other specialties represented 6.9% (n = 16); and radiographers represented 4.8% (n = 11). Other authors not falling into the aforementioned categories made up the remaining 29.6% (n = 68). Researchers, lawyers, and marketing managers were
amongst those in the 'Other' category. Web pages authored by journalists had the highest percentage of overall positive viewpoints (52%, n = 24). This was followed by web pages authored by radiologists (46%, n = 41) and radiographers (45%, n = 5). Web pages authored by non-radiologist doctors accounted for the lowest proportion of positive viewpoints (18.8%, n = 3). Four percent of web pages authored by radiologists (n = 4) or by those falling into the 'Other' category (n = 3) had negative viewpoints, followed by web pages authored by journalists at 2% (n = 1). There were no negative viewpoints identified in web pages authored by radiographers or non-radiologist doctors. Those authors falling into the category 'Other' had the highest proportion of balanced viewpoints at 39.7% (n = 27), while journalists had the greatest proportion of neutral viewpoints at 34.7% (n = 16). See Additional file 1: Appendix 4.1 for a tabulated summary (Fig. 2).

There were 130 pages authored in North America, expressing 60 positive, 48 balanced, 18 neutral and 4 negative pages of content. In Europe (n = 49), there were 21 positive, 17 balanced, 9 neutral and 2 negative pages authored. The United Kingdom had the greatest number of European-authored pages (n = 23), and these expressed 9 positive, 10 balanced, 4 neutral and 0 negative opinions. The distribution of the remaining pages from Europe was as follows: Netherlands 6, Germany 6, Italy 8, Ireland 4, Belgium 5, Norway 1, Denmark 1, Switzerland 5, Austria 1, Cyprus 3, Europe not specified 9. Finally, a miscellaneous group including Australia (11), Israel (4), Asia (12), South America (2), Africa (2) and not available (14) expressed 19 positive, 18 balanced, 7 neutral and 2 negative opinions in the pages that were authored. This frequency data is presented in Table 2, with corresponding percentages in Fig. 3.

Radiologists in North America (n = 42) authored 19 positive, 18 balanced, 3 neutral and 2 negative viewpoints. In Europe, radiologists (n = 31) authored 14 positive, 12 balanced, 3 neutral and 2 negative viewpoints. UK radiologists authored four pages expressing two positive and two balanced perspectives. These data are presented in Table 3 and Fig. 4.

Fig. 1 Schematic of Google search and results summary: total data points (n = 680); non-relevant (n = 113); inaccessible (n = 6); duplicates (n = 313); relevant and accessible (n = 248), of which positive (n = 106), balanced (n = 95), neutral (n = 38) and negative (n = 8).

Table 1 Summary of categorization of posts by origin, with percentages (n = 248)

           Journal (n = 120, 48.39%)  Media (n = 75, 30.20%)  Commercial (n = 32, 12.90%)  Education (n = 21, 8.47%)
Positive   44 (36.67%)                39 (52.00%)             21 (65.63%)                  3 (14.29%)
Negative   4 (3.33%)                  4 (5.33%)               0 (0.00%)                    0 (0.00%)
Balanced   58 (48.33%)                24 (32.00%)             4 (12.50%)                   9 (42.86%)
Neutral    14 (11.67%)                8 (10.67%)              7 (21.88%)                   9 (42.86%)

The Google Alerts RSS feed identified 5504 new posts over the 3-week period from the 34 search terms. Of the alerts identified, 177 were deemed relevant and accessible. Sixty-five percent (n = 115) of the posts expressed an overall positive viewpoint; 11% (n = 20) a balanced viewpoint; 23% (n = 40) a neutral viewpoint; and 1% (n = 2) an overall negative viewpoint towards the potential impact of AI on radiology (Fig. 5). Of the relevant posts, the majority were of media origin (86%, n = 152); peer-reviewed journals accounted for 8% (n = 14); 4% (n = 7) were from commercial websites; and 2.3% (n = 4) were from other sources.

Fig. 2 Number of overall viewpoints presented by each author group (N = 230): radiologist, journalist, non-radiologist doctor, radiographer and other, with bars for positive, negative, balanced and neutral counts.

Commercial webpages had the highest percentage of overall positive viewpoints (85.7%, n = 6). This was followed by media webpages (67%, n = 102), peer-reviewed journals (35.7%, n = 5), and webpages that fell under the category 'other' (25%, n = 1). Forums, educational webpages, and blogs composed the 'other' category. Peer-reviewed journals had the greatest percentage of balanced viewpoints (21.4%, n = 3), followed by those that fell under the category 'other' (25%, n = 1). One (7%) article from a peer-reviewed journal had an overall negative viewpoint, as did one (0.66%) of the media webpages. No negative
viewpoints were identified in the commercial category. See Table 4 for a summary.

An identifiable named author was present on 85% (n = 151) of the relevant webpages identified by the Google Alerts RSS feed. The majority of listed authors were journalists (66%, n = 100). This was followed by commercial authors (12.6%, n = 19), radiologists (4%, n = 6), researchers (4%, n = 6), and doctors working in other specialties (3.3%, n = 5). Other authors not falling into these categories represented 9.9% (n = 15) of the contributors. This is illustrated in Fig. 4. Marketing managers, media editors, and students were amongst those that made up the 'other' category. Webpages with a commercial author had the highest percentage of overall positive viewpoints (84%, n = 16). This was followed by webpages authored by journalists (64%, n = 64); non-radiologist doctors (60%, n = 3); 'other' authors (53%, n = 8); and radiologists (50%, n = 3). Researchers had the greatest percentage of balanced viewpoints (67%, n = 4), while radiologists had the greatest percentage of neutral viewpoints (33%, n = 2). One webpage authored by a journalist (1%) and one authored by an author in the 'other' category (7%) had overall negative viewpoints. These data, summarized and tabulated, can be seen in Additional file 1: Appendix 4.2 (Fig. 6).

Table 2 Geographical origin of viewpoints

Geographical origin  Number  Positive  Negative  Neutral  Balanced
North America        130     60        4         18       48
Europe               49      21        2         9        17
UK                   23      9         0         4        10
Other                46      19        2         7        18

Fig. 3 Geographical origin of viewpoint (percentage), by region (North America, Europe, UK, Other) across the positive, negative, neutral and balanced categories.

Table 3 Geographical origin of radiologist and viewpoint

Origin         Number  Positive  Negative  Neutral  Balanced
North America  42      19        2         3        18
Europe         31      14        2         3        12
UK             4       2         0         0        2
Other          12      3         1         3        5

Fig. 4 Geographical origin and radiologist viewpoint (percentage).

Fig. 5 Schematic of live Google Alert RSS feed and results summary: total data points (n = 5504); non-relevant (n = 5069); duplicates (n = 258); relevant and accessible (n = 177), of which positive (n = 115), balanced (n = 20), neutral (n = 40) and negative (n = 2).
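The screening funnels sketched in Figs. 1 and 5 and the reported percentage splits can be cross-checked with a few lines of arithmetic. The sketch below is a re-derivation from the counts reported in the Results (not the authors' spreadsheet), and it reproduces the headline figures up to rounding: roughly 43%/3%/38%/15% for the Google search and 65%/1%/11%/23% for the RSS feed.

```python
# Counts as reported in the Results and the Fig. 1 / Fig. 5 schematics.
google = {"total": 680, "non_relevant": 113, "inaccessible": 6, "duplicates": 313}
rss = {"total": 5504, "non_relevant": 5069, "duplicates": 258}

# Screening funnel: drop non-relevant/inaccessible posts first, then duplicates.
google_relevant = google["total"] - google["non_relevant"] - google["inaccessible"]  # 561
google_final = google_relevant - google["duplicates"]                                # 248
rss_final = rss["total"] - rss["non_relevant"] - rss["duplicates"]                   # 177

def viewpoint_pct(counts, denominator):
    """Percentage split of viewpoint categories, rounded to one decimal place."""
    return {k: round(100 * v / denominator, 1) for k, v in counts.items()}

google_split = viewpoint_pct(
    {"positive": 106, "negative": 8, "balanced": 95, "neutral": 38}, google_final
)
rss_split = viewpoint_pct(
    {"positive": 115, "negative": 2, "balanced": 20, "neutral": 40}, rss_final
)

print(google_final, google_split)
# → 248 {'positive': 42.7, 'negative': 3.2, 'balanced': 38.3, 'neutral': 15.3}
print(rss_final, rss_split)
# → 177 {'positive': 65.0, 'negative': 1.1, 'balanced': 11.3, 'neutral': 22.6}
```

The denominators (248 and 177) are the stated relevant-and-accessible totals; the paper itself rounds the Google-search split slightly differently in places (43%, 38.2%/38.3%), which is consistent with these values to within rounding.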
Table 4 Summary of categorization of posts by origin, with percentages (n = 177)

           Journal (n = 14, 7.91%)  Media (n = 152, 85.88%)  Commercial (n = 7, 3.95%)  Other (n = 4, 2.26%)
Positive   5 (35.71%)               102 (67.11%)             6 (85.71%)                 1 (25.00%)
Negative   1 (7.14%)                1 (0.66%)                0 (0.00%)                  1 (25.00%)
Balanced   3 (21.43%)               13 (8.55%)               0 (0.00%)                  1 (25.00%)
Neutral    5 (35.71%)               36 (23.68%)              1 (14.29%)                 1 (25.00%)

Discussion
Opinions and forecasts concerning the role and impact of AI on medical imaging have exploded in the last number of years, primarily due to recent advancements in AI products for radiology. These viewpoints can be positive, negative, balanced, or neutral in their content. AI in medical imaging was first mentioned in the literature in the 1950s and has evolved substantially since the early 2000s with the advent of machine learning (ML) and deep learning (DL) algorithms [19]. The number of AI exhibitors at the annual meetings of the Radiological Society of North America (RSNA) and the European Congress of Radiology (ECR) tripled from 2017 to 2019 [20, 21]. Since 2016, the US Food and Drug Administration (FDA) has approved 64 AI/ML-based medical imaging technologies
with 21 of these specializing in the field of radiology [22]. In Europe, 240 AI/ML devices have been approved over the 2015–2020 period by the Conformité Européenne (CE), with 53% for use in radiology [23]. In 2019, the European Society of Radiology published a white paper to provide the radiology community with information on AI, and a further study by the ESR demonstrated that there is a demand amongst the radiological community to integrate AI education into radiology curricula and training programs, including issues related to ethics, legislation and data management [24]. The aim of the present paper was to use internet activity to determine current opinion on whether AI is a threat or an opportunity to the field, as this will have an impact on recruitment and resource allocation to radiology.

Fig. 6 Number of overall viewpoints presented by each author group (N = 151): journalist, radiologist, commercial, researcher, non-radiologist doctor and other, with bars for positive, negative, balanced and neutral counts.
We observed that a wide diversity of commentators were engaged in dialogue pertaining to AI in radiology, ranging from those with professional and academic backgrounds to those with individual and organizational interests. While these authors predictably included healthcare professionals, there was also a significant representation from those with media and commercial backgrounds. Opinions on AI in radiology were therefore gathered from authors with a wide variety of occupations and backgrounds, including radiologists, non-radiology physicians, journalists, researchers, radiographers, commercial managers, physicists, lawyers, computer scientists, data officers, engineers, students, and pharmacists. There was a relatively equal division of authorship between North America and Europe. This distribution was also demonstrated among the radiologist-authored pages included in this study. This professional and geographic diversity of authors provides a more complete and international sample of opinions on the impact of AI on radiology.

Radiologists repeatedly expressed the opinion that the inclusion of AI algorithms could help with labour-intensive tasks and improve efficiency and workflow. They also opined against the potential of AI replacing radiologists. Numerous studies in the literature also argued against AI replacing radiologists [25, 26]. Two examples of comments made by radiologists included:

The higher efficiency provided by AI will allow radiologists to perform more value-added tasks, becoming more visible to patients and playing a vital role
in multidisciplinary clinical teams

and

Radiologists, the physicians who were on the forefront of the digital era in medicine, can now guide the introduction of AI in healthcare. The time to work for and with AI in radiology is now.

Radiographers expressed the opinion that utilizing AI algorithms could

ultimately lead to a reduction in the radiation exposure while maintaining the high quality of medical images

and that radiographers would be vital in building quality imaging biobanks for AI databases. Interestingly, radiographers also wrote that AI should be integrated into the medical radiation practice curriculum and that there should be more emphasis on radiomics. Furthermore, radiographers expressed the belief that emotional intelligence, not artificial intelligence, is the cornerstone of all patient care, and while the concept of 'will a robot take my job' may be a hot topic, they believe that patients will not accept their radiographs being taken by a robotic device.

This study identified a total of ten negative viewpoints, which included comments from radiologists (5), a lawyer (1), a journalist (1) and a neuroscience Ph.D. student (1). Examples include:

In the long-term future, I think that computers will take over the work of image interpretation from humans, just as computers or machines have taken over so many tasks in our lives. The question is, how
quickly will this happen?

and

Radiologists know that supporting research into AI and advocating for its adoption in clinical settings could diminish their employment opportunities and reduce respect for their profession. This provides an incentive to oppose AI in various ways.

and

An artificially intelligent computer program can now diagnose skin cancer more accurately than a board-certified dermatologist; better yet, the program can do it faster and more efficiently.

and

A.I. is replacing doctors in fields such as interpreting X-rays and scans, performing diagnoses of patients' symptoms, in what can be described as a 'consulting physician' basis.

A recent editorial from the Radiological Society of North America (RSNA) highlighted a number of high-profile negative viewpoints made some years ago relating to the impact of AI on radiologists [27]. These included an AI pioneer, recently awarded the Association for Computing Machinery Turing Award: 'We should stop training radiologists now' [27]. Secondly, a venture capitalist, Vinod Khosla, proclaimed in 2017 that 'the role of the radiologist will be obsolete in 5 years' and replaced by 'sophisticated algorithms' [28]. Furthermore, an American 'Affordable Care' architect remarked at the 2016 American College of Radiology Annual Meeting
that radiologists will be replaced by computer technology in 4–5 years [29] and that 'in a few years there may be no specialty called radiology' [30]. Interestingly, many of the timeframes within which AI was predicted to replace radiologists have already expired, with a relatively minor uptake of AI in imaging interpretation and without signs of AI replacing radiologists at present. These controversial viewpoints have the potential to grab headlines, but they are not without potential for negative impact on the future of radiology, and particularly on recruitment of future radiologists, given that studies have shown that medical
  • 70. These webpages were predominantly peer-reviewed jour- nal papers and media articles. These findings highlight that radiologists are actively involved in both AI-related research and online discussions relating to AI and the field of radiology. Radiologists have been encouraged to play an active role in the development of applications of AI in medical imaging to ensure appropriate implementation and validation of AI in clinical practice [26, 33]. In 2017, The American College of Radiology established The Data Science Institute partly with this purpose in mind [34]. The main limitation of this study was the use of sub- jective assessment to qualify information into positive, negative, neutral, and balanced. This introduces potential for observer bias in determining the overall viewpoint of posts, but it was attempted to minimize this by using two senior radiologists as the assessors. We did not quantita- tively assess readability of posts. We only used one search engine ‘Google’ and limited to just the English language and to the first two pages of each search term, a strategy following previous publications and backed by behavio- ral studies which have indicated that 95% of users choose websites listed on the first page of results, leaving only 5% reviewing results on any subsequent pages. We acknowledge that the list of search terms in Addi- tional file 1: Appendix 1 is not exhaustive and is just a representative sample of the actual terms that may be used when searching for AI in medical imaging but by using a broad range of terms and studying the first two pages of findings that these search results would yield the most relevant information. The RSS feed was used as a surrogate for incident information and may not be wholly representative of information found in social media news feeds, Twitter, and other sites. There is also potential that a single 3-week alert period may be biased by news and
  • 71. media events that occurred during that time. In Conclusion, authors of 43% of all pages evalu- ated expressed the overall opinion that AI would have a positive impact on the radiologist and the radiology department; 38.3% presented a balanced viewpoint; 15.3% presented a neutral viewpoint; and 3.2% pre- sented a negative viewpoint. We have demonstrated that the overall view presented online is a positive one that AI will benefit the specialty. We should be excited and look forward to advancements in this technol- ogy which has the potential to improve accuracy of diagnosis in diagnostic radiology, reduce errors and improve efficiency in dealing with rapidly increasing workloads. Abbreviations AI: Artificial intelligence; CAGR : Compound annual growth rate; CAD: Computer-aided diagnostics; DL: Deep learning; ECR: European congress of radiology; FDA: Food and drug agency; ML: Machine learning; RSS: Rich site summary; RSNA: Radiological society of North America. Supplementary Information The online version contains supplementary material available at https:// doi. org/ 10. 1186/ s13244- 022- 01209-4. Additional file 1: 34 key search phrases used in both static Google search and Rich Site Summary feed search strategy. Authors’ contributions
  • 72. PM and NNC had full access to all of the data in the study and take responsi- bility for the integrity of the data and accuracy of the data analysis. Concept and design: PM, NNC, DR, CC, PMcL, MMcE, OJO’C, MM. Acquisition, analysis, or interpretation of data: PM, NNC, CC, DR, MM, OJO’C, ATO’M. Drafting of the manuscript: PM, NNC, OJO’C, MM, ATO’M. Critical revision of the manuscript for important intellectual content: PM, NNC, ATO’M, CC, MMcE, MM, OJO’C. Administrative, technical, or material support: ATO’M. Supervision: PMcL, MM, OJO’C. All authors read and approved the final manuscript. Funding No sources of funding were sought or required to carry out this study. Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Ethical approval was granted by the institution review board: Clinical Research Ethics Committee of the Cork teaching Hospitals. Consent for publication Not applicable. Competing interests
  • 73. The authors declare that they have no competing interests. Author details 1 Cork University Hospital/Mercy University Hospital, Cork, Ireland. 2 University College Cork, Cork, Ireland. 3 Cork University Hospital, Cork, Ireland. 4 South Infirmary Victoria University Hospital, Cork, Ireland. Received: 12 November 2021 Accepted: 12 March 2022 https://doi.org/10.1186/s13244-022-01209-4 https://doi.org/10.1186/s13244-022-01209-4 Page 11 of 11Mulryan et al. Insights into Imaging (2022) 13:79 References 1. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. The MIT Press, Cambridge 2. Pesapane F, Codari M, Sardanelli F (2018) Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2:35. https:// doi. org/ 10. 1186/ s41747- 018- 0061-6 3. Esteva A, Kuprel B, Novoa RA et al (2017) Dermatologist- level classifica- tion of skin cancer with deep neural networks. Nature 542(7639):115–118.
  • 74. https:// doi. org/ 10. 1038/ natur e21056 4. McDonald RJ, Schwartz KM, Eckel LJ et al (2015) The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload. Acad Radiol 22(9):1191–1198. https:// doi. org/ 10. 1016/j. acra. 2015. 05. 007 5. Wood L (2021) The worldwide diagnostic imaging industry is expected to reach $48.5 Billion by 2027. BusinessWire A Berkshire Hathaway Company. ResearchandMarkets.com. https:// www. busin esswi re. com/ news/ home/ 20211 20900 5945/ en/ The- World wide- Diagn ostic- Imagi ng- Indus try- is- Expec ted- to- Reach- 48.5- Billi on- by- 2027--- Resea rchAn dMark ets. com#: ~: text= Amid% 20the% 20COV ID% 2D19% 20cri sis,the% 20ana lysis% 20per iod% 202020% 2D2027. 6. Shanafelt TD, Gradishar WJ, Kosty M et al (2014) Burnout and career satisfaction among US oncologists. J Clin Oncol 32(7):678–686. https:// doi. org/ 10. 1200/ JCO. 2013. 51. 8480 7. Shanafelt TD, Balch CM, Bechamps GJ et al (2009) Burnout and career sat- isfaction among American surgeons. Ann Surg 250(3):463–471. https:// doi. org/ 10. 1097/ SLA. 0b013 e3181 ac4dfd
  • 75. 8. PintoDosSantos D, Giese D, Brodehl S et al (2019) Medical students’ attitude towards artificial intelligence: a multicentre survey. Eur Radiol 29(4):1640–1646. https:// doi. org/ 10. 1007/ s00330- 018- 5601-1 9. Gong B, Nugent JP, Guest W et al (2019) Influence of artificial intelligence on Canadian medical students’ preference for radiology specialty: a national survey study. Acad Radiol 26(4):566–577 10. Sit C, Srinivasan R, Amlani A et al (2020) Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging 11:14. https:// doi. org/ 10. 1186/ s13244- 019- 0830-7 11. Waymel Q, Badr S, Demondion X, Cotten A, Jacques T (2019) Impact of the rise of artificial intelligence in radiology: what do radiologists think? Diagn Interv Imaging 100(6):327–336. https:// doi. org/ 10. 1016/j. diii. 2019. 03. 015 12. Chen JY, Heller MT (2014) How competitive is the match for radiology residency? Present view and historical perspective. J Am Coll Radiol 11(5):501–506. https:// doi. org/ 10. 1016/j. jacr. 2013. 11. 011 13. GlobalStats (2021) Search engine market share south
worldwide 2021–2022. https://gs.statcounter.com/search-engine-market-share. Accessed 02 Jan 2021
14. Lorigo L, Pan B, Hembrooke H, Joachims T, Granka L, Gay G (2006) The influence of task and gender on search and evaluation behavior using Google. Inf Process Manag 42(4):1123–1131
15. Spink A, Jansen BJ, Blakely C, Koshman S (2006) A study of results overlap and uniqueness among major web search engines. Inf Process Manag 42(5):1379–1391
16. Enge E, Spencer S, Stricchiola J, Fishkin R (2012) The art of SEO: mastering search engine optimization, 2nd edn. O'Reilly Media, Sebastopol
17. Hopkins L (2012) Online reputation management: why the first page of Google matters so much. www.leehopkins.net/2012/08/30/online-reputation-management-why-the-first-page-of-google-matters-so-much/. Accessed 06 Feb 2021
18. Chuklin A, Serdyukov P, De Rijke M (2013) Modeling clicks beyond the first result page. In: Proceedings of the international conference on information and knowledge management, pp 1217–1220
  • 77. 19. Kaul V, Enslin S, Gross SA (2020) History of artificial intelligence in medi- cine. Gastrointest Endosc 92(4):807–812 20. Radiological Society of North America (2017) AI exhibitors RSNA 2017. Radiological society of North America. http:// rsna2 017. rsna. org/ exhib itor/? action= add& filter= Misc& value= Machi ne- Learn ing. Accessed 21. Radiological Society of North America (2019) AI exhibitors RSNA 2019. Radiological society of North America. https://rsna2019.mapyourshow. com/8_0/explore/pavilions.cfm#/show/cat- pavilion|AI%20Showcase. Accessed 22. Benjamens S, Dhunnoo P, Meskó B (2020) The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 3:118. https:// doi. org/ 10. 1038/ s41746- 020- 00324-0 23. Muehlematter UJ, Daniore P, Vokinger KN (2021) Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. Lancet Digit Health 3(3):e195– e203. https:// doi. org/ 10. 1016/ S2589- 7500(20) 30292-2 24. Codari M, Melazzini L, Morozov SP et al (2019) Impact of
  • 78. artificial intelli- gence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights Imaging 10:105. https:// doi. org/ 10. 1186/ s13244- 019- 0798-3 25. Recht M, Bryan RN (2017) Artificial intelligence: threat or boon to radiolo- gists? J Am Coll Radiol 14:1476–1480 26. King BF (2018) Artificial intelligence and radiology: what will the future hold? J Am Coll Radiol 15(3, Part B):501–503 27. Langlotz CP (2019) Will artificial intelligence replace radiologists? Radiol Artif Intell. 1(3):e190058 28. Farr C (2020) Here’s why one tech investor thinks some doctors will be ‘obsolete’ in five years. CNBC 2017. https:// www. cnbc. com/ 2017/ 04/ 07/ vinod- khosla- radio logis ts- obsol ete- five- years. html. Accessed 4 Feb 2020 29. Siegel E (2020) Will radiologists be replaced by computers? Debunking the hype of AI. Carestream 2016. https:// www. cares tream. com/ blog/ 2016/ 11/ 01/ debat ing- radio logis ts- repla ced- by- compu ters/. Accessed 4 Feb 2020 30. Chockley K, Emanuel E (2016) The end of radiology? Three threats to the
  • 79. future practice of radiology. J Am Coll Radiol 13:1415–1420. https:// doi. org/ 10. 1016/j. jacr. 2016. 07. 010 31. Bin Dahmash A, Alabdulkareem M, Alfutais A et al (2020) Artificial intel- ligence in radiology: does it impact medical students preference for radiology as their future career? BJR Open 2(1):20200037 32. Goldberg JE, Rosenkrantz AB (2019) Artificial intelligence and radiology: a social media perspective. Curr Probl Diagn Radiol 48(4):308– 311 33. Dreyer K, Allen B (2018) Artificial intelligence in health care: brave new world or golden opportunity? J Am Coll Radiol 15(4):655–657 34. McGinty GB, Allen B (2018) The ACR data science institute and AI advisory group: harnessing the power of artificial intelligence to improve patient care. J Am Coll Radiol 15(3, Part B):577–579 Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in pub- lished maps and institutional affiliations. https://doi.org/10.1186/s41747-018-0061-6 https://doi.org/10.1186/s41747-018-0061-6 https://doi.org/10.1038/nature21056 https://doi.org/10.1016/j.acra.2015.05.007 https://doi.org/10.1016/j.acra.2015.05.007 https://www.businesswire.com/news/home/20211209005945/en/
European Society of Radiology (ESR) Insights into Imaging (2022) 13:107
https://doi.org/10.1186/s13244-022-01247-y

STATEMENT

Current practical experience with artificial intelligence in clinical radiology: a survey of the European Society of Radiology

European Society of Radiology (ESR)*
*Correspondence: [email protected]; European Society of Radiology (ESR), Am Gestade 1, 1010 Vienna, Austria

Abstract
A survey among the members of the European Society of Radiology (ESR) was conducted regarding the current practical clinical experience of radiologists with Artificial Intelligence (AI)-powered tools. 690 radiologists completed the survey. Among these were 276 radiologists from 229 institutions in 32 countries who had practical clinical experience with an AI-based algorithm and formed the basis of this study. The respondents with clinical AI experience included 143 radiologists (52%) from academic institutions, 102 radiologists (37%) from regional hospitals, and 31 radiologists (11%) from private practice. The use case scenarios of the AI algorithm were mainly related to diagnostic interpretation, image post-processing, and prioritisation of workflow. Technical difficulties with integration of AI-based tools into the workflow were experienced by only 49 respondents (17.8%). Of 185 radiologists who used AI-based algorithms for diagnostic purposes, 140 (75.7%) considered the results of the algorithms generally reliable. The use of a diagnostic algorithm was mentioned in the report by 64 respondents (34.6%) and disclosed to patients by 32 (17.3%). Only 42 (22.7%) experienced a significant reduction of their workload, whereas 129 (69.8%) found that there was no such effect. Of 111 respondents who used AI-based algorithms for clinical workflow prioritisation, 26 (23.4%) considered algorithms to be very helpful for reducing the workload of the medical staff, whereas the others found them only moderately helpful (62.2%) or not helpful at all (14.4%). Only 92 (13.3%) of the total 690 respondents indicated that they had intentions to acquire AI tools. In summary, although the assistance of AI algorithms was found to be reliable for different use case scenarios, the majority of radiologists experienced no reduction of practical clinical workload.

Keywords: Professional issues, Artificial intelligence in imaging, Artificial intelligence and workload, Artificial intelligence in radiology

© The Author(s) 2022. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Key points
• Artificial Intelligence (AI) algorithms are being used for a large spectrum of use case scenarios in clinical radiology in Europe, including assistance with interpretive tasks, image post-processing, and prioritisation in the workflow.
• Most users considered AI algorithms generally reliable and experienced no major problems with technical integration in their daily practice.
• Only a minority of users experienced a reduction of the workload of the radiological medical staff due to the AI algorithms.

Background and objectives
Digital imaging is naturally predisposed to benefit from the rapid and exciting progress in data science. The increase of imaging examinations and the associated diagnostic data volume have resulted in a mismatch
between the radiologic workforce and workload in many European countries. In an opinion survey conducted in 2018 among the members of the European Society of Radiology (ESR), many respondents had expectations that algorithms based on artificial intelligence (AI), and particularly machine learning, could reduce radiologists' workload [1]. Although a growing number of AI-based algorithms has become available for many radiological use case scenarios, most published studies indicate that only very few of these tools are helpful for reducing radiologists' workload, whereas the majority rather result in an increased or unchanged workload [2]. Furthermore, a recent analysis of the literature found that the available scientific evidence for the clinical efficacy of 100 commercially available CE-marked products was quite limited, leading to the conclusion that AI in radiology is still in its infancy [3]. The purpose of the present survey was to get an impression of the current practical clinical experience of radiologists from different European countries with AI-powered tools.

Methods
A survey was created by the members of the ESR eHealth and Informatics Subcommittee and was intentionally kept brief to allow responding in a few minutes. A few demographic questions covered the country, the type of institution (i.e. academic department, regional hospital, or private practice), and the main field of radiological practice, as summarised in Tables 1, 2 and 3. For the more specific questions about the use of AI-based algorithms it was clearly stated that the answers were intended to reflect experience from clinical routine rather than research and testing purposes. The questions related to the use of AI addressed the respondents' working experience with certified AI-based algorithms, possible difficulties in integrating these algorithms in the IT system, and the different use case scenarios for which AI-based algorithms were used in clinical routine, mainly distinguishing between tools aiming at facilitating the diagnostic interpretation process itself (questions shown in Fig. 1) and those aiming at facilitating the prioritisation of examinations in the workflow. Specific questions addressed the technical integration of the algorithms (Table 4); radiologists' confidence in the diagnostic performance (Table 5); quality control mechanisms to evaluate diagnostic accuracy (Tables 6, 7 and 8); communication of the use of diagnosis-related algorithms towards patients or in the radiology reports (Tables 9 and 10); and the usefulness of algorithms for reducing the radiologists' workload (Tables 11 and 12). Respondents also had the opportunity to offer free-text remarks regarding their use of AI-based tools. Respondents who did not use AI-based algorithms in clinical practice were asked to skip all questions related to clinical AI use and to proceed directly to the final question about acquisition of AI-based algorithms, so that the opinions of all participating radiologists were taken into consideration for that question about their intentions to acquire such tools (Fig. 2).

The survey was created through the ESR central office using the Survey Monkey platform (SurveyMonkey Inc., San Mateo, CA, USA), and 27,700 radiologist members of the ESR were invited by e-mail to participate in January 2022. The survey was closed after a second reminder in March 2022. The answers of the respondents were collected and analysed using Excel software (Microsoft, Redmond, WA, USA).

Results
A total of 690 ESR radiologist members from 44 countries responded to the survey, for a response rate of 2.5%. The distribution per country and the proportion of respondents with practical clinical experience with AI-based algorithms per country are given in Table 1.

The 276 respondents with practical clinical experience with AI-based algorithms were affiliated to 229 institutions in 32 countries; their answers formed the main basis of this study. Table 2 shows that 143 (52%) of the respondents with practical clinical experience with AI algorithms were affiliated to academic institutions, whereas 102 (37%) worked in regional hospitals and 31 (11%) in private practice.

Table 3 characterises the same group of respondents as in Table 2 regarding their main field of activity, showing that a wide range of subspecialties was represented in the survey and that abdominal radiology, neuroradiology, general radiology, and emergency radiology together accounted for half of the respondents. A detailed analysis of the results according to subspecialties was beyond the scope of the study because of the relatively small number of resulting groups.

The experience regarding technical integration of the software algorithms into the IT system or workflow is summarised in Table 4, showing that only 17.8% of respondents reported difficulties with integration of these tools, whereas a majority of 44.5% observed no
such difficulties, although 37.7% of respondents did not answer this question.

Algorithms were used in clinical practice either for assistance in interpretation or for prioritisation of workflow. An overview of the scenarios for which AI-powered algorithms were used by the respondents is given in Fig. 1.

Table 1 Distribution of all 690 respondents by countries and proportion of radiologists with practical clinical experience with AI algorithms
Country | Respondents | With practical clinical AI experience | Percentage (%)
Italy | 71 | 23 | 32
Spain | 64 | 19 | 30
UK | 60 | 23 | 38
Germany | 50 | 23 | 46
Netherlands | 50 | 35 | 70
Sweden | 29 | 14 | 48
Denmark | 27 | 15 | 56
Turkey | 27 | 3 | 11
Norway | 26 | 12 | 46
Switzerland | 27 | 14 | 54
France | 25 | 12 | 48
Belgium | 23 | 13 | 57
Austria | 21 | 12 | 57
Greece | 21 | 5 | 24
Portugal | 17 | 5 | 29
Romania | 16 | 4 | 25
Ukraine | 13 | 3 | 23
Croatia | 11 | 4 | 36
Russian Fed | 11 | 4 | 36
Bulgaria | 10 | 0 | 0
Poland | 10 | 4 | 40
Finland | 7 | 4 | 57
Hungary | 7 | 3 | 43
Serbia | 7 | 1 | 14
Slovenia | 7 | 3 | 43
Slovakia | 6 | 5 | 83
Ireland | 5 | 2 | 40
Lithuania | 5 | 2 | 40
Bos. & Herzegovina | 4 | 0 | 0
Czech Republic | 4 | 3 | 75
Israel | 4 | 2 | 50
Latvia | 4 | 0 | 0
Armenia | 3 | 0 | 0
Albania | 2 | 0 | 0
Azerbaijan | 2 | 0 | 0
Belarus | 2 | 0 | 0
Estonia | 2 | 2 | 100
Georgia | 2 | 0 | 0
Kazakhstan | 2 | 0 | 0
Luxembourg | 2 | 1 | 50
Cyprus | 1 | 0 | 0
Iceland | 1 | 0 | 0
Kosovo | 1 | 1 | 100
Uzbekistan | 1 | 0 | 0
Total | 690 | 276 |

Use of algorithms for assistance in diagnostic interpretation
Among the 276 respondents who shared their practical experience with AI-based tools, a total of 185 (67%) reported clinical experience with one or more integrated algorithms for routine diagnostic tasks. As seen in Fig. 1, there were different use case scenarios, the commonest being detection or marking of specific findings. The free-text remarks of the respondents showed a large range of pathologies in practically all clinical fields and with almost all imaging modalities. Typical examples of pathologies were pulmonary emboli and parenchymal nodules, cerebral haemorrhage and reduced cerebrovascular blood flow, or colonic polyps on CT. Other tasks included the detection of traumatic lesions, e.g. the presence of bone fractures on conventional radiographs, or the calculation of bone age. The second most common diagnostic scenario was assistance with post-processing (e.g. using AI-based tools for image reconstruction or quantitative evaluation of structural or functional abnormalities), followed by primary interpretation (i.e. potentially replacing the radiologist), assistance with differential diagnosis (e.g. by facilitation of literature search), and quality control.

Table 2 Respondents with practical clinical experience with AI-based algorithms: distribution of origin by countries and type of institutions
Country | Respondents | Institutions | Academic departments | Private practice | Regional hospitals
Netherlands | 35 | 20 | 16 | 0 | 19
Germany | 23 | 21 | 14 | 3 | 6
Italy | 23 | 21 | 13 | 0 | 10
UK | 23 | 22 | 7 | 2 | 14
Spain | 19 | 16 | 14 | 1 | 4
Denmark | 15 | 7 | 11 | 1 | 3
Switzerland | 14 | 13 | 6 | 6 | 2
Sweden | 14 | 14 | 7 | 1 | 6
Belgium | 13 | 9 | 5 | 1 | 7
Austria | 12 | 11 | 7 | 1 | 4
France | 12 | 11 | 5 | 5 | 2
Norway | 12 | 9 | 6 | 0 | 6
Greece | 5 | 5 | 2 | 2 | 1
Portugal | 5 | 4 | 0 | 4 | 1
Slovakia | 5 | 5 | 2 | 2 | 1
Croatia | 4 | 4 | 1 | 1 | 2
Finland | 4 | 3 | 3 | 0 | 1
Poland | 4 | 3 | 3 | 0 | 1
Romania | 4 | 2 | 2 | 0 | 2
Russian Fed | 4 | 4 | 3 | 0 | 1
Czech Republic | 3 | 3 | 1 | 0 | 2
Hungary | 3 | 3 | 2 | 0 | 1
Slovenia | 3 | 3 | 2 | 0 | 1
Turkey | 3 | 3 | 3 | 0 | 0
Ukraine | 3 | 2 | 2 | 1 | 0
Estonia | 2 | 2 | 1 | 0 | 1
Ireland | 2 | 2 | 1 | 0 | 1
Israel | 2 | 2 | 2 | 0 | 0
Lithuania | 2 | 2 | 0 | 0 | 2
Kosovo | 1 | 1 | 1 | 0 | 0
Luxembourg | 1 | 1 | 0 | 0 | 1
Serbia | 1 | 1 | 1 | 0 | 0
Total | 276 | 229 | 143 (52%) | 31 (11%) | 102 (37%)

Although a detailed analysis of all the different diagnostic use case scenarios was beyond the scope of this survey, the respondents' answers to specific survey questions are shown in Tables 5, 6, 7, 8, 9, 10 and 11. Because some respondents skipped or incompletely answered some questions, the number of yes/no answers per question was not complete. As shown in Table 5, most respondents (75.7%) found the results provided by the algorithms generally reliable.
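Percentages like these follow directly from the raw answer counts reported in the tables. As a purely hypothetical illustration (the authors analysed their data in Excel; the Python below and its variable names are only a sketch, not the authors' method):

```python
from collections import Counter

def percentages(counts: Counter) -> dict:
    """Convert absolute answer counts into one-decimal percentages."""
    total = sum(counts.values())
    return {answer: round(100 * n / total, 1) for answer, n in counts.items()}

# Answer counts taken from Table 5 (reliability of diagnostic algorithms,
# 185 respondents). Note that round() yields 7.6 for "Skipped" (14/185),
# where the paper reports 7.5.
table5 = Counter({"Yes": 140, "No": 31, "Skipped": 14})
print(percentages(table5))  # {'Yes': 75.7, 'No': 16.8, 'Skipped': 7.6}

# Overall response rate: 690 respondents of the 27,700 invited ESR members.
print(round(100 * 690 / 27_700, 1))  # 2.5
```

The same tally reproduces the 67% (185 of 276) of AI users with diagnostic algorithms and the 40% (111 of 276) with prioritisation algorithms quoted in the text.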
A significant number of respondents declared that they used mechanisms of quality assurance regarding the diagnostic performance of the algorithms. These included keeping records of diagnostic discrepancies between the radiologist and the algorithms (44.4%), establishing receiver operating characteristic (ROC) curves of diagnostic accuracy based on the radiologist's diagnosis (34.1%), and/or ROC curves based on the final diagnosis in the medical record (30.3%) (Tables 6, 7 and 8).

The use of a diagnostic algorithm was disclosed to patients by 17.3% of the respondents but mentioned in the report by 34.6% (Tables 9 and 10). Only a minority of 22.7% of the respondents who used AI-based algorithms for diagnostic purposes experienced a reduction of their workload, whereas 69.8% reported that there was no reduction effect on their workload (Table 11).

Table 3 Respondents with practical clinical experience with AI-based algorithms: main field of activity/subspecialty
Field of practice | Respondents | %
Abdominal radiology | 45 | 16.3
Neuroradiology | 45 | 16.3
General radiology | 39 | 14.1
Chest radiology | 32 | 11.6
Cardiovascular radiology | 24 | 8.7
Musculoskeletal radiology | 23 | 8.3
Oncologic imaging | 23 | 8.3
Breast radiology | 17 | 6.2
Emergency radiology | 10 | 3.6
Paediatric radiology | 8 | 2.9
Urogenital radiology | 6 | 2.2
Head and Neck radiology | 4 | 1.5
Total | 276 | 100

[Fig. 1 Which type of scenario (use case) was addressed by the used AI algorithm(s) in clinical routine? The answers of all 276 respondents with practical clinical AI experience are shown, including the number of respondents using one or more algorithms for assistance in diagnostic interpretation and/or workflow prioritisation. Bar values: assistance during interpretation (detecting/marking of specific findings like nodules, emboli etc.) 142 (51.5%); assistance for post-processing (e.g. image reconstruction, quantitative evaluation) 79 (28.6%); primary interpretation (replacing the radiologist) 19 (7%); assistance during interpretation (e.g. access to literature, facilitating differential diagnosis) 14 (5%); quality control 11 (4%); workflow prioritisation 111 (40%)]

Table 4 Respondents with practical clinical experience with AI-based algorithms: Have there been any major problems with integration of AI-based algorithms into your IT system/workflow?
Answer | Respondents | %
Yes | 49 | 17.8
No | 123 | 44.5
Skipped | 104 | 37.7
Total | 276 | 100

Table 5 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Were the findings of the algorithm(s) considered to be reliable?
Answer | Respondents | %
Yes | 140 | 75.7
No | 31 | 16.8
Skipped | 14 | 7.5
Total | 185 | 100

Table 6 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Were discrepancies between the software and the radiologist recorded?
Answer | Respondents | %
Yes | 82 | 44.4
No | 89 | 48.1
Skipped | 14 | 7.5
Total | 185 | 100

Table 7 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Was the diagnostic accuracy (ROC curves) supervised on a regular basis in comparison with the radiologist's diagnosis?
Answer | Respondents | %
Yes | 63 | 34.1
No | 108 | 58.4
Skipped | 14 | 7.5
Total | 185 | 100

Table 8 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Was the diagnostic accuracy (ROC curves) supervised on a regular basis in comparison with the final diagnosis in the medical record?
Answer | Respondents | %
Yes | 56 | 30.3
No | 115 | 62.2
Skipped | 14 | 7.5
Total | 185 | 100

Table 9 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Were patients informed that an AI software was used to reach the diagnosis?
Answer | Respondents | %
Yes | 32 | 17.3
No | 139 | 75.2
Skipped | 14 | 7.5
Total | 185 | 100

Table 10 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Was the use of an AI software to reach the diagnosis mentioned in the report?
Answer | Respondents | %
Yes | 64 | 34.6
No | 107 | 57.9
Skipped | 14 | 7.5
Total | 185 | 100

Table 11 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Has (have) the algorithm(s) used for diagnostic assistance proven to be helpful in reducing the workload for the medical staff?
Answer | Respondents | %
Yes | 42 | 22.7
No | 129 | 69.8
Skipped | 14 | 7.5
Total | 185 | 100

Use of algorithms for prioritisation of workflow
Among the 276 respondents who had practical experience with AI-based tools, there were 111 respondents (40%) reporting experience with algorithms for prioritisation of image sets in their clinical workflow. As shown in Table 12, the prioritisation algorithms were considered to be very helpful for reducing the workload of the medical staff by 23.4% of the respondents who used them, whereas the other users found them only moderately helpful (62.2%) or not helpful at all (14.4%).

Table 12 Experience of 111 respondents with AI-based algorithms for clinical workflow prioritisation: Has the algorithm proven to be helpful in reducing the workload for the medical staff?
Answer | Respondents | %
Not at all helpful | 16 | 14.4
Moderately helpful | 69 | 62.2
Very helpful | 26 | 23.4
Total | 111 | 100

Intentions of all respondents regarding the acquisition of an AI-based algorithm
All participants of the survey, regardless of their practical clinical experience, were given the opportunity to answer the question whether they intended to acquire a certified AI-based software. Of the 690 participants, 92 (13.3%) answered "yes", 363 (52.6%) answered "no", and 235 (34.1%) did not answer this question. Figure 2 summarises the reasons given by participants who did not intend to acquire AI-based algorithms for their clinical use.

Discussion
While the previous survey on AI [1] was based on the expectations of the ESR members regarding the impact of AI on radiology, the present survey intended to obtain an overview of current practical clinical experience with AI-based algorithms. Although the respondents with practical clinical experience in this survey represent only 1% of the ESR membership, their proportion among all respondents varied greatly among countries. The geographical distribution of the 276 radiologists who shared their experience with such tools in clinical practice shows that the majority was affiliated to institutions in Western and Central Europe or in Scandinavia. Half of all respondents with practical clinical experience with AI tools were affiliated to academic institutions, whereas the other half practiced radiology in regional hospitals or in private services. Since it is likely that the respondents in this survey were radiologists with a special interest in AI-based algorithms, it cannot be assumed that this survey reflects the true proportion of radiologists in the European region with practical clinical experience with AI-based tools.
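The quality-assurance mechanisms the respondents described (Tables 6, 7 and 8), recording radiologist-algorithm discrepancies and monitoring diagnostic accuracy against the radiologist's diagnosis as the reference standard, amount to straightforward bookkeeping. The sketch below is hypothetical: the survey does not describe any concrete implementation, and all names and data here are invented for illustration. It computes one operating point (sensitivity/specificity) of the ROC curves the respondents mentioned.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    ai_positive: bool           # algorithm flagged a finding
    radiologist_positive: bool  # radiologist's diagnosis (reference standard)

def quality_report(cases: list[Case]) -> dict:
    """Tally agreement between algorithm and radiologist across logged cases."""
    tp = sum(c.ai_positive and c.radiologist_positive for c in cases)
    tn = sum(not c.ai_positive and not c.radiologist_positive for c in cases)
    fp = sum(c.ai_positive and not c.radiologist_positive for c in cases)
    fn = sum(not c.ai_positive and c.radiologist_positive for c in cases)
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else None,
        "specificity": tn / (tn + fp) if tn + fp else None,
        # discrepancy record, as described in Table 6
        "discrepancies": [c.case_id for c in cases
                          if c.ai_positive != c.radiologist_positive],
    }

# Invented example data: two agreements, one false positive, one false negative.
cases = [
    Case("c1", True, True),
    Case("c2", False, False),
    Case("c3", True, False),   # false positive, logged as a discrepancy
    Case("c4", False, True),   # false negative, logged as a discrepancy
]
print(quality_report(cases))
```

Substituting the final diagnosis from the medical record for `radiologist_positive` gives the second accuracy comparison the survey asked about (Table 8).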