ARTIFICIAL
INTELLIGENCE IN
PATHOLOGY
PRESENTER : DR. SAIAKSHAY
MODERATOR : DR. K K SURESH
CONTENTS
• INTRODUCTION
• DIGITAL PATHOLOGY
• ARTIFICIAL INTELLIGENCE (AI)
• ARTIFICIAL INTELLIGENCE IN PATHOLOGY
• APPLICATION OF AI IN PATHOLOGY
• LIMITATIONS
• CONCLUSION
• FUTURE PROSPECTS
• REFERENCES
INTRODUCTION
• Automation in pathology is currently employed in automated tissue
processing, staining and report generation
• This has progressed to the introduction of digital pathology, which is now
under approval for primary diagnosis
• Digital Pathology (DP) allows remote diagnostic reporting with flexible
hours, ensures service delivery even in challenging situations such as the
COVID-19 pandemic, and facilitates second and expert opinion
reporting
• DP enables implementation of artificial intelligence-based algorithms
in routine practice
• The integration of AI will be a milestone for healthcare in the next
decade
Digital pathology
• It is the practice of pathology using digital imaging for clinical and
nonclinical purposes
• A computer-based environment using a Digital Pathology System as
opposed to a microscope-based one
• It involves scanning and converting glass slides into high-resolution
digital images
• Digital slides can then be viewed, analyzed, shared, interpreted and
stored using a digital pathology system
The Digital Pathology Ecosystem consists of 3 major components
1. Information system – access to the hospital information system,
electronic medical record, laboratory information system and
radiology picture archiving and communication system (PACS)
2. Digital pathology system – 2 connected subsystems – a device to
acquire and manage digital images (whole slide scanner, WSS) and a
workstation to view or share images
3. System tools – to analyse and manipulate the digital images, e.g.
image analysis algorithms
Telepathology
• Telepathology technologies allow light microscopic examination and
diagnosis from a distance.
• These systems may be combined with deep learning (DL) algorithms to
assist the pathologist in screening a remote case for consultation or in
extracting features from these images for assisted diagnosis.
• Telepathology use cases include primary diagnosis, teleconsultation,
intraoperative consultation, telecytology, telemicrobiology, quality
assurance, and tele-education
TELEPATHOLOGY – three types: DYNAMIC, STATIC and HYBRID
• Dynamic systems are remote-controlled microscopes that provide the
pathologist with a “live” view of the distant microscopic image while
allowing them to control the stage, focus and magnification remotely
• The second method of telepathology is “store-and-forward”
telepathology, in which digital images are first captured via cameras or
acquired via WSI systems for transmission to the consultant
• These images can be captured in standard file formats and can be
viewed via a remote viewer.
• The third method of telepathology is the use of hybrid systems, which
combine features of both “store-and-forward” and dynamic systems
• Hybrid telepathology systems can provide a “live” view using
robotic-controlled microscopy with high-resolution still image capture
and retrieval; this dual functionality makes them valuable
• Some of the hybrid systems on the market today have WSI capabilities
as well
Digital images
• A digital pathology image (DPI) is composed of pixels arranged in rows
and columns
• Sources – autopsy, grossing room, cameras mounted on microscopes or
scanners, static images, live streaming and, most recently, whole slide
imaging
WHOLE SLIDE IMAGING
• Also known as virtual microscopy
• The process of generating a whole slide image using a whole slide
scanner
• A whole slide scanner is an automated robotic microscope
capable of digitizing glass slides
• Consists of a microscope, robotics, hardware
and software that scan/digitize glass slides
to generate whole slide images
• Basic features of a WSS – automated barcode reading,
automated tissue identification, autofocus,
autoscanning and automated image compression
• Whole slide scanners use digital cameras with sensors
to capture each field of view (FOV) using tile- or line-based scanning
methods
• The final whole slide image is produced by stitching together
each FOV, representing a high-resolution replica of
the original glass slide
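As a rough illustration of the stitching step described above, here is a minimal sketch (Python/NumPy, with arbitrary tile and grid sizes) that assembles equally sized, perfectly aligned tiles into one mosaic; real scanners additionally correct for tile overlap, registration and illumination, which is not modelled here.

import numpy as np

def stitch_tiles(tiles, n_rows, n_cols):
    # Stitch a row-major list of equally sized RGB tiles into one mosaic image.
    tile_h, tile_w, channels = tiles[0].shape
    mosaic = np.zeros((n_rows * tile_h, n_cols * tile_w, channels), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, n_cols)
        mosaic[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = tile
    return mosaic

# Example: a 2 x 3 grid of dummy 512 x 512 FOV tiles
tiles = [np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8) for _ in range(6)]
mosaic = stitch_tiles(tiles, n_rows=2, n_cols=3)
print(mosaic.shape)  # (1024, 1536, 3)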
• In special situations (cytology/blood smears), Z-stacking is done.
• Image quality is affected by pre-imaging steps (tissue preparation,
staining), various acquisition and processing parameters (lens
objective numerical aperture, digital camera sensor, compression)
and display features (monitor resolution)
• Image viewing software is used for microscopic viewing of the digital
slides, with image navigation, panning and zooming
• In addition, annotations, measurements, screenshots, image rotation,
side-by-side display of multiple images and running image analysis
algorithms are possible
• In WSI, lower-resolution planes are stacked above higher-resolution
regions in a pyramidal architecture, so that during retrieval the viewer
uses the appropriate resolution plane to load a small region instead of
the entire image
• Examples of WSI viewers – OpenSlide (open-source viewer),
ImageScope and Webscope (Leica), Sedeen (Pathcore), and IntelliSite
(Philips)
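As a minimal sketch of pyramid-based retrieval, the example below uses the open-source OpenSlide Python bindings to read one small region at a chosen pyramid level instead of loading the whole image; the file name, coordinates and level are hypothetical placeholders.

import openslide  # pip install openslide-python (also requires the OpenSlide C library)

slide = openslide.OpenSlide("example_slide.svs")  # hypothetical slide file

print(slide.dimensions)         # (width, height) of the highest-resolution level
print(slide.level_count)        # number of pyramid levels
print(slide.level_downsamples)  # downsample factor of each level

# read_region takes (x, y) in level-0 coordinates, a pyramid level and a size;
# only this region is decoded, not the entire multi-gigabyte image
region = slide.read_region(location=(10000, 20000), level=2, size=(1024, 1024))
region.convert("RGB").save("region_preview.png")  # read_region returns an RGBA PIL image

slide.close()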
Uses of digital images
• Telepathology for consultation
• Multidisciplinary meeting (tumor boards), conferences, presentation
• Documentation, pathology reports
• Archiving
• Quality assurance
• Primary pathology diagnosis
• Remote intra-operative consultation
• Nonclinical – research, education, legal situations
Advantages of digital images
over conventional slides
• Easy portability
• A tremendous advantage of digital pathology is quick access to a patient's
archived slides for comparison with the current slides, as well as quicker
access to second opinions worldwide
• Virtual immunohistochemistry service provided by large national laboratories.
• WSI images provide very rich and complex information about tissue or
disease characteristics
• They provide a platform for the development of automated or semi-automated
image analysis methods, aiming to increase diagnostic accuracy,
reliability, reproducibility and efficiency by enabling quantitative image
analysis
• This opens the door to the application of Artificial
Intelligence in Pathology
What is artificial intelligence?
• The term “Artificial Intelligence” is often used to describe the ability of a
computer to perform what appears to be human-level cognition
• Rather than thinking, it utilizes information to improve its performance
through learning
Are AI, machine learning and deep
learning the same?
• AI is a branch of computer science about systems which can learn from data
Examples:
 Google predictive searches
 Product recommendations on Amazon
 Music recommendations by Spotify
 Google Maps' “fastest route”
• Machine learning is a branch of AI based on the idea that computers can learn
from data, as humans learn from experience, and make decisions about new
data without human intervention
• Deep learning, in turn, is a subset of machine learning that uses artificial
neural networks to classify information, similar to the human brain
Deep learning
• In ML, the programmer determines the feature vectors (on which
detection is based) and defines them through a process called feature
engineering
• Neural networks in DL can abstract their own features (representations)
from the input data by themselves – feature learning
• These representations are formed in the layers between input and output,
called hidden layers
• Thus a neural network learns rules by creating hidden variables from the data
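To make the idea of a hidden layer concrete, here is a minimal PyTorch sketch of a network with one hidden layer whose weights are learned from data rather than hand-engineered; the input size, hidden size and class count are arbitrary placeholders.

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # Toy network: the hidden layer learns its own internal representation
    # (feature learning) instead of relying on hand-engineered feature vectors.
    def __init__(self, n_inputs=64, n_hidden=32, n_classes=2):
        super().__init__()
        self.hidden = nn.Linear(n_inputs, n_hidden)   # hidden layer
        self.output = nn.Linear(n_hidden, n_classes)  # output layer
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.hidden(x))  # learned hidden representation
        return self.output(h)         # class scores

model = TinyNet()
dummy_batch = torch.randn(8, 64)  # 8 samples with 64 input features each
print(model(dummy_batch).shape)   # torch.Size([8, 2])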
Artificial neural network
– a model within deep learning
Models for deep learning are (very) loosely based
upon the neural network architecture of the
mammalian brain (visual cortex)
Types of Deep learning
Mainly five types of deep/artificial neural networks are currently used for
pathological image analysis:
1) Convolutional neural networks (CNNs)
2) Fully convolutional networks (FCNs)
3) Generative adversarial networks (GANs) – have recently elicited increasing interest
4) Stacked autoencoders (SAEs)
5) Recurrent neural networks (RNNs)
CNNs, FCNs and GANs are the dominant models used in digital pathology;
SAEs and RNNs are adopted in some literature but are not the major
methods for pathological image computing
• COMPUTER VISION
 A field of computer science trying to mimic human vision.
 Deep learning algorithms like CNNs perform well in computer
vision tasks
• GRAPHIC PROCESSING UNIT (GPU)
 The chip on the computer's graphics card designed to rapidly process
images, typically found in powerful gaming computers.
 Indispensable for fast processing of WSI.
• PATCHING
 The size of whole slide images is huge (±15 GB, roughly three 3-hour HD
movies on Netflix), so they need to be divided into smaller parts –
patches. This process is called patching.
• DATA AUGMENTATION
 A way to get more input data for training a model when more data are
not available, by minimally altering structures of interest,
e.g. by rotating or flipping mitotic figures (see the sketch after this list).
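A minimal sketch of patching and data augmentation, assuming a plain NumPy image array and non-overlapping patches (real pipelines typically read patches directly from the WSI pyramid and use richer augmentation):

import numpy as np

def extract_patches(image, patch_size=256):
    # Cut an RGB image array into non-overlapping square patches.
    h, w, _ = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

def augment(patch):
    # Simple augmentation: the four 90-degree rotations plus a horizontal flip of each.
    rotations = [np.rot90(patch, k) for k in range(4)]
    return rotations + [np.fliplr(r) for r in rotations]

region = np.random.randint(0, 255, (1024, 1024, 3), dtype=np.uint8)  # dummy region
patches = extract_patches(region)
augmented = [a for p in patches for a in augment(p)]
print(len(patches), len(augmented))  # 16 patches -> 128 augmented patches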
Application of AI in daily life
AI in Medicine
• AI can mine electronic health records, journal and textbook materials, and
patient clinical support systems for epidemiologic patterns, risk factors and
diagnostic predictions
• Signal classification, such as interpretation of ECG and EEG recordings
• Image analysis in radiology
• Studies on deep learning show:
 Dermatology – detection of SCC vs benign seborrhoeic keratosis with
diagnostic accuracy comparable with dermatologists
 Detection of sight-threatening retinal diseases using optical coherence
tomography
 The US FDA approved a deep learning-based autonomous AI diagnostic
system to detect diabetic retinopathy on retinal fundus images
WHY AI IN PATHOLOGY?
• Precision medicine demands precision diagnostics
• The pathologist's diagnosis based on histological slides is at the
centre of clinical management, especially decisions on cancer care
• This emphasizes the need for accuracy in histopathological
diagnosis for personalized therapy
• Discordance among pathologists is common, as seen in assessing
certain subjective features like cytonuclear pleomorphism/degree
of atypia, cellularity, counting mitotic figures, scoring of tumour-
infiltrating lymphocytes and assessment of the proliferation (Ki-67) index.
• The end point of traditional microscopic slide evaluation is a qualitative
assessment of changes in cell number, morphology and distribution
• AI is anticipated to help pathologists transition from
providing just traditional qualitative (descriptive) reports to more
quantitative results.
• Due to the rapid increase in population and a shrinking pathology
workforce, with even greater shortages in subspecialities, automation is
required to ensure a smooth workflow
• With the development of AI applications in pathology, they can play the
role of a digital assistant to the pathologist
Image Analysis
• Image analysis is the extraction of meaningful information from
images
• Specialized image analysis software platforms are commercially
available (e.g., Visiopharm, Indica Labs, Genie from Leica,
Definiens, Virtuoso from Roche)
A typical supervised workflow:
1. A training set is presented to the program to differentiate
between 2 entities
2. The program starts making guesses
3. The guesses are refined through multiple known examples
4. Internal models are generated to finally make the right guess
5. The model is tested on a test/validation set before being used
on unknown, unlabelled data
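A minimal sketch of this train-then-validate loop using scikit-learn, with synthetic feature vectors standing in for measurements extracted from two entities (the model and data are placeholders, not a pathology-specific pipeline):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic labelled examples standing in for features of two entities
X, y = make_classification(n_samples=500, n_features=20, n_classes=2, random_state=0)

# Hold out a test set that the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)  # guesses refined on multiple known examples
print(accuracy_score(y_test, model.predict(X_test)))  # checked on held-out examples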
• Software algorithms have been developed to identify rare events
(e.g., screening for microorganisms, counting mitoses, and detecting
micrometastases in lymph nodes) and quantify stains (e.g., most
commonly breast biomarkers) and various features (e.g., extent of
tissue fibrosis and degree of liver steatosis).
• They can also analyze spatial patterns and feature distribution better
than humans.
• Image algorithms include several steps, such as image preprocessing
(e.g., color normalization), following which DL technologies can be
applied to perform
 Classification
 Segmentation
 Image translation
 Image style transfer
Classification
• Refers to the task of assigning a label to an image.
• Automatic classification of tissue structures and subtypes can also be
extremely useful to augment and improve the histopathology
workflow
• The leading algorithms for image classification are convolutional
neural networks (CNNs)
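As a minimal, hypothetical sketch of a CNN assigning a single label to an image patch (the architecture, patch size and class count are illustrative only, and the network is untrained):

import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    # A very small CNN that assigns one label (e.g., tumour vs non-tumour) to a 64 x 64 RGB patch.
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)                  # convolutional feature extraction
        return self.classifier(x.flatten(1))  # one score per class

model = PatchCNN()
patches = torch.randn(4, 3, 64, 64)  # a batch of 4 dummy patches
print(model(patches).shape)          # torch.Size([4, 2])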
Segmentation
• Refers to the task of assigning labels to specific regions of an image – can be
seen as a pixel-level classification task.
• Segmentation of subcellular structures can be useful for automating common
tasks - cell enumeration (via nuclei counting), determination of intracellular
locations of molecular markers, analysis of subcellular morphological features
such as nuclear size, eccentricity and chromatin texture
• The most common deep learning architecture for segmentation is the U-Net (a minimal sketch follows below)
• Multilabel classification is based on image segmentation into regions
of interest
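A minimal U-Net-style sketch in PyTorch with a single contracting/expanding level and one skip connection, producing a per-pixel class map; a real U-Net uses several levels and trained weights, so this is illustrative only.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    # One-level U-Net: contracting path, expanding path and a skip connection.
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = conv_block(3, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)            # 32 = 16 (upsampled) + 16 (skip)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return self.head(d)

model = TinyUNet()
print(model(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 2, 128, 128])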
Mitotic figure detection
Application of segmentation
• Detecting mitoses is a difficult problem because of inter- and intra-
observer variability
• Breast tumors are good examples of cases in which proliferation
has prognostic significance
• In 2013, the Assessment of Mitosis Algorithms (AMIDA13) study
was launched, and a two-step object detection approach was used (the
first step identified candidate objects, which were classified in the
second step as mitoses or non-mitoses)
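The two-step idea can be sketched as follows (this is not the actual AMIDA13 method): candidate objects are first detected, here with scikit-image blob detection on the dark nuclei, and each candidate is then passed to a classifier; the classifier below is a placeholder standing in for a trained model.

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import blob_log

def detect_candidates(rgb_image):
    # Step 1: find dark, roughly round objects as candidate mitotic figures.
    gray = rgb2gray(rgb_image)
    blobs = blob_log(1.0 - gray, min_sigma=3, max_sigma=10, threshold=0.1)
    return [(int(y), int(x)) for y, x, _ in blobs]

def classify_candidate(rgb_image, center, classifier, patch_size=64):
    # Step 2: crop a patch around each candidate and let a trained model decide.
    y, x = center
    half = patch_size // 2
    patch = rgb_image[max(0, y - half):y + half, max(0, x - half):x + half]
    return classifier(patch)  # "mitosis" or "non-mitosis"

image = np.random.rand(512, 512, 3)                   # dummy image
placeholder_classifier = lambda patch: "non-mitosis"  # stands in for a trained CNN
candidates = detect_candidates(image)
labels = [classify_candidate(image, c, placeholder_classifier) for c in candidates]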
Image Translation
• Refers to the conversion of one image representation to another image
representation such as converting between night & day images, winter &
summer images, satellite images to maps.
• In computational pathology, it has been explored for stain color normalization
and for converting between different image modalities
• The most common models used for image translation tasks are generative
adversarial networks (GANs) or their variants
Style Transfer
• Style transfer focuses on transferring the style of a single reference image to
the content of the input image in order to generate an output image
• It is also used for stain-color normalization - input image converted to have
the color gamut and structural color assignments that mimic a target,
reference image
Image enhancement
• Image analysis methods find it difficult to cope with variability between
individual stained whole-slide (WSI) images due to different scanners,
staining procedures, and institutional environments
• Algorithms intended to reduce variability between and within microscopy
image datasets have been developed, collectively designated as stain color
normalization methods
• These include color deconvolution, clustering-based algorithms and, recently,
deep learning algorithms (a deconvolution example follows below)
Stain color normalisation
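As an illustration of the colour deconvolution step mentioned above, the sketch below uses scikit-image's built-in H&E/DAB stain separation on its bundled immunohistochemistry sample image; full stain-normalization pipelines go further (e.g., matching the stain matrix and intensities of a reference image), which is not shown.

import numpy as np
from skimage import data
from skimage.color import rgb2hed, hed2rgb

ihc_rgb = data.immunohistochemistry()  # bundled sample image standing in for a stained patch

# Colour deconvolution: separate the image into Haematoxylin, Eosin and DAB channels
ihc_hed = rgb2hed(ihc_rgb)

# Reconstruct an RGB view of the haematoxylin channel alone
null = np.zeros_like(ihc_hed[:, :, 0])
h_only = hed2rgb(np.stack((ihc_hed[:, :, 0], null, null), axis=-1))
print(ihc_hed.shape, h_only.shape)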
Mode switching
• Novel image modalities can be difficult to interpret.
 Quantitative phase imaging (QPI) – lower phototoxicity and portability
 Confocal laser endomicroscopy (CLE) – allows for in vivo real-time analysis
of tissue
• Deep neural networks can be used to convert these images to H&E-
like images for better interpretation.
• A GAN was trained to convert QPI images to digital H&E-stained images.
• Neural style transfer was used to digitally stain CLE images to H&E.
In silico labeling
• The process in which stain-free brightfield images are reconstructed into
molecularly stained images, highlighting fluorescently labelled
cellular structures
• Rana et al. used cGANs to computationally stain images of
unstained paraffin-embedded WSIs.
Extended depth-of-field
• High-quality WSIs need high-numerical-aperture lenses to capture
high-resolution images, but these have restricted fields of view (increasing
the scanning time) as well as shallow depths of field
• An example of the use of deep learning to increase depth of field is an
implementation of a CNN that greatly increased the speed at which
automated scanning and diagnosis of malaria could be performed
DENOISING
• Another contribution of AI that can markedly improve image quality is the
reduction of noise.
• Images with low signal can be difficult to interpret, making in vivo imaging
especially challenging
• The U-Net architecture has been successfully utilized for denoising of
fluorescence microscopy images
APPROACHES TO RAPID
HISTOLOGY
INTERPRETATIONS
• Histological data are at the center of decision-making in tumor surgery,
especially in the brain
• This relies on frozen section evaluation, which is time-consuming (30–45 min
from the time of specimen reception) and labor-intensive for preparing the
histological slide, with logistical barriers in the case of an off-site lab
• Slide-free histology methods have been proposed and validated to varying
degrees.
• The most promising tools for imaging unprocessed or minimally
processed biological specimens are:
• Microscopy with ultraviolet surface excitation (MUSE)
• Stimulated Raman histology (SRH)
• Light-sheet microscopy
• Only SRH has been incorporated into an AI-based diagnostic pipeline
validated for brain tumor diagnosis in a clinical setting.
• Hollon et al. adapted the Inception-V2 CNN to diagnose the 10 most
common brain tumors using SRH images and deployed their software
on a bedside SRH imager in a head-to-head clinical trial of 278
patients
• The study by Hollon et al. concluded that an autonomous diagnostic pathway
utilizing SRH and a CNN could predict diagnoses non-inferior to those of
board-certified neuropathologists interpreting conventional histological slides
from the same patients
• The SRH-plus-CNN pathway can be used as a standalone method for brain tumor
diagnosis when pathology resources are unavailable, and also to improve
diagnostic accuracy in neuropathology, even in well-staffed centers
Application of AI in pathology
Finding tumour deposits within lymph nodes – as a screening method
to reduce the workload of pathologists
Detection and grading of cancer – determining whether the tissue studied
harbours tumour and, if present, performing tumour typing and grading;
Gleason grading has been achieved in prostate cancer
Methods to automatically detect mitotic figures in breast cancer
tissue sections
Support and improve quantification, especially in
immunohistochemistry (stain intensity), resulting in accurate
measurement and thereby reproducibility
Measurements of tumour size or distance to resection margins
will be more objective compared with classical microscopy
Pre-checking of Papanicolaou-stained gynaecological cytology in
cervical cancer screening
Enabling a smooth workflow – prioritization of caseloads and workload
balancing; reallocating cases across a network enables experts to do
expert work and generalists to do general pathology
Text feature extraction – connected to pathology reports: structuring
or extracting specific text parts from routine pathology reports for
further scientific purposes
 Text interpretation and coding error prevention – preventive error
correction; for example, given “well-differentiated, invasive
breast carcinoma (NST), G2”, it should immediately highlight that
“well differentiated” and G2 are incompatible (a toy example of such
a check follows below)
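A toy sketch of such a consistency check (the phrase-to-grade mapping and string matching are purely illustrative; a real system would parse structured reports against curated coding rules):

DIFFERENTIATION_TO_GRADE = {
    "well differentiated": "g1",
    "moderately differentiated": "g2",
    "poorly differentiated": "g3",
}

def check_grade_consistency(report_text):
    # Flag reports where the stated differentiation and the stated grade disagree.
    text = report_text.lower().replace("-", " ")
    warnings = []
    for phrase, expected_grade in DIFFERENTIATION_TO_GRADE.items():
        if phrase in text and expected_grade not in text:
            for grade in DIFFERENTIATION_TO_GRADE.values():
                if grade in text:
                    warnings.append(f'"{phrase}" is incompatible with {grade.upper()}')
    return warnings

print(check_grade_consistency("Well-differentiated, invasive breast carcinoma (NST), G2"))
# ['"well differentiated" is incompatible with G2']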
Limitations Of AI
• Heavily data dependent – training an algorithm requires a huge dataset, often
numbering in the thousands of examples, unlike the human brain
• Training large AI models produces carbon emissions nearly five times
greater than the lifecycle carbon emissions of an automobile
• Image complexity – i.e., the large number of features needed to define the
samples (instances) within the dataset
• Current algorithms are trained to perform a specific task, with
limited capability of applying the same to new problems
• The detection of implicit bias is made difficult in that we have no
true understanding of how the network processes its inputs to
arrive at the correct outputs. In other words, there are problems
with explainability and interpretability.
For the development of AI in pathology, there must be a close
interaction between the pathologists (as content providers, content
annotators, and ground truth arbiters for training) and the computer
scientists who devise the appropriate machine learning algorithms for
use
Conclusion
• Human pathologists will not be replaced by computers
• By removing the mysticism surrounding AI, it becomes clear that it is
a silicon-based digital assistant for the clinical and the research
laboratories rather than an artificial brain that will replace the
pathologist
• Integration of machine learning into routine care will be a milestone
for the healthcare sector in the next decade
• Machine learning facilitated the transition from image analysis to
image interpretation.
• AI is also called the third revolution in pathology, after the invention
of immunohistochemistry and next-generation sequencing
• Deep learning is capable of feature learning, in contrast to the
feature engineering required in classical machine learning
• Commonly used artificial neural networks are
CNNs and GANs
• Predominantly used artificial neural networks by task:
 Classification – CNN
 Segmentation – U-Net
 Image translation – GAN
 In silico labelling – cGAN
• The study by Hollon et al. concluded that the combination of
stimulated Raman histology (SRH) and a CNN is comparable
with the diagnoses of experts
Future prospects
The entire field of histopathology is experiencing rapid developments
across all domains.
Opportunities and challenges include:
(a) The wider deployment of conventional whole-slide scanning
capabilities that can finally enable the entry of slide-interpretation into
the digital realm;
(b) The use of AI to enhance the performance of slide-scanning by
interposing color normalization, denoising, sharpening, and depth-of-field
enhancements with
specialized AI tools prior to classification steps;
(c) The use of AI to dramatically simplify optomechanical
requirements for scanners by correcting for lower raw scan performance
with computational tools, thus decreasing instrumentation costs, scan
times, and inadequate scan performance issues;
(d) Combining novel slide-free imaging hardware and AI to maximize
both information content and similarity to conventional H&E-
stained slide appearance;
(e) Getting pathologists, surgeons, administrators, the FDA, and payors
comfortable with the above.
These are exciting times, as pathology finally seems poised to take advantage
of notable advances in microscopy and computation.
REFERENCES
1. Russakovsky O, et al. ImageNet large scale visual recognition challenge.
International Journal of Computer Vision 2015;115(3):211–52.
2. Cireşan D, Meier U, Masci J, Schmidhuber J. Multi-column deep neural
network for traffic sign classification. Neural Networks 2012;32:333–8.
3. He K, Zhang X, Ren S, Sun J. Deep residual learning for image
recognition. In: 2016 IEEE conference on computer vision and pattern
recognition (CVPR); 2016. p. 770–8.
4. Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, Yener
B. Histopathological image analysis: a review. IEEE Reviews in
Biomedical Engineering 2009;2:147–71.
https://doi.org/10.1109/RBME.2009.2034865.
5. Long J, Shelhamer E, Darrell T. Fully convolutional networks for
semantic segmentation. In: 2015 IEEE conference on computer
vision and pattern recognition (CVPR); 2015. p. 3431–40.
6. Hollon TC, et al. Near real-time intraoperative brain tumor diagnosis
using stimulated Raman histology and deep neural networks. Nature
Medicine 2020;26(1):52–8.
7. Xu J, Luo X, Wang G, Gilmore H, Madabhushi A. A deep
convolutional neural network for segmenting and classifying
epithelial and stromal regions in histopathological images.
Neurocomputing 2016;191:214–23.