1. Mammogram Image Classification using Attention Mechanism with Transfer Learning
Dr. Munir Ahmad
Postdoc Software Engineering, University of Hertfordshire, UK
PhD Medical Physics (Imaging), University College London, UK
MSc Nuclear Engineering (Control Engineering), PIEAS, PK
MSc Physics (Electronics), Punjab University, PK
(munirahm@gmail.com)
2. Presentation Overview…
• What is image classification?
• Networks used for image classification
• Standard network models
• Mammogram dataset used
• Transfer learning
• Attention mechanism
• Our model
• Results
• Conclusions
3. Cats and Dogs Image Classification…
Neural Network Model
8. Pre-Trained Versions of CNNs
The ImageNet dataset contains 14,197,122 annotated images, organized according to the WordNet hierarchy. The pre-trained networks used here are trained on its 1,000-class classification subset (ILSVRC).
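As a sketch of how such a pre-trained network is reused for a new task (assuming TensorFlow/Keras; the 224×224 input size and the 3-class head are illustrative choices, not the presentation's exact configuration):

```python
import tensorflow as tf

def build_classifier(num_classes=3, weights="imagenet"):
    """Transfer-learning sketch: frozen VGG16 backbone plus a new head.

    Pass weights="imagenet" to load the ImageNet pre-trained weights
    (this downloads them on first use); weights=None gives a randomly
    initialized backbone with the same architecture.
    """
    # Load VGG16 without its original 1000-class ImageNet head.
    base = tf.keras.applications.VGG16(
        weights=weights, include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # freeze the pre-trained convolutional features

    # New classification head for the target classes
    # (e.g. normal / benign / malignant).
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(base.input, out)
```

Only the small new head is trained at first; the frozen backbone supplies the generic features learned on ImageNet.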
9. Attention Mechanism
In the image below, suppose we only need to pay attention to the balloon. We can simply segment the balloon and work with it alone, masking the rest of the image with white.
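The masking idea above can be sketched with NumPy; the image and the "balloon" coordinates are dummy values chosen only for illustration:

```python
import numpy as np

# Toy "attention by masking": keep a region of interest, paint the rest white.
img = np.zeros((8, 8), dtype=np.uint8)   # dummy grayscale image
img[2:5, 3:6] = 120                      # pretend this patch is the balloon

mask = np.zeros_like(img, dtype=bool)
mask[2:5, 3:6] = True                    # True where we want to attend

# Inside the mask keep the original pixels; outside, replace with white (255).
attended = np.where(mask, img, 255)
```

A learned attention layer produces a soft (continuous) weighting rather than this hard binary mask, but the effect is the same: downstream layers see mostly the attended region.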
10. Need for a CAD Model…
• Early diagnosis, better treatment
• Increased efficiency
• Increased accuracy
• To save radiologists' time
• To save radiologists' effort
• Socio-economic effect (screening)
• To save the costs involved
Breast cancer (CA breast) is the second most common cancer in the world, and in Pakistan almost 45% of cancers are breast cancers. Mammography is the best available screening examination.
11. MIAS Dataset…
MIAS is a small dataset
Total number of images: 323
Malignant: 53
Benign: 63
Normal: 207
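The class counts above are quite imbalanced (207 normal vs. 53 malignant). One common mitigation, sketched below with the slide's counts, is inverse-frequency class weighting; the "balanced" heuristic used here is an assumption for illustration, not something the presentation specifies:

```python
# Inverse-frequency class weights for the MIAS counts:
# weight_c = total / (n_classes * count_c), the common "balanced" rule
# (the same formula scikit-learn uses for class_weight="balanced").
counts = {"normal": 207, "benign": 63, "malignant": 53}

total = sum(counts.values())          # 323 images in total
n_classes = len(counts)
weights = {c: total / (n_classes * n) for c, n in counts.items()}
```

Rare classes (malignant) receive a weight roughly four times that of the common class (normal), so each class contributes comparably to the training loss.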
16. Methodology…
True Positive (TrP): a positive case classified as positive
False Positive (FaP): a negative case classified as positive
True Negative (TrN): a negative case classified as negative
False Negative (FaN): a positive case classified as negative
Accuracy = (TrP + TrN) / (TrP + FaP + TrN + FaN) × 100%
Precision = TrP / (TrP + FaP) × 100%
Recall = TrP / (TrP + FaN) × 100%
F1 = 2 × (Precision × Recall) / (Precision + Recall)
Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones; high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned).
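The four formulas above can be checked numerically; the confusion-matrix counts in the usage comment are made up purely for illustration:

```python
def classification_metrics(trp, fap, trn, fan):
    """Compute the slide's metrics from confusion-matrix counts.

    trp/fap/trn/fan follow the slide's naming: true positive,
    false positive, true negative, false negative.
    """
    accuracy = (trp + trn) / (trp + fap + trn + fan) * 100
    precision = trp / (trp + fap) * 100
    recall = trp / (trp + fan) * 100
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example with invented counts: 40 TrP, 10 FaP, 40 TrN, 10 FaN
acc, prec, rec, f1 = classification_metrics(40, 10, 40, 10)
```

With these symmetric counts all four metrics come out to 80, which is a quick sanity check that the formulas are consistent with each other.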
22. Conclusions
• Test and training accuracy and loss are better for VGG16 without the attention layer.
• ResNet50 performs better with the attention layer for all malignant, normal, and benign cases.
• Correspondingly, test and training accuracy and loss are better for ResNet50 with the attention layer.
• Feature extraction points need to be optimized (future work).
• Use of the attention layer needs to be optimized (future work).
• For malignant cases, VGG16 performs equally with or without the attention layer in terms of precision and recall.