Project presentation - Capstone
1. CONVOLUTIONAL DEEP LEARNING
NEURAL NETWORK FOR RADIOLOGICAL
CHEST X-RAY DIAGNOSIS
ISDS 577 Capstone Seminar
Guided by Professor Daniel Soper
Team Members: Skandha Chinta,
Scott Cunningham, Apurva Desai,
Saket Dhamne
2. CAN A CONVOLUTIONAL NEURAL NETWORK BE USED TO DETERMINE IF A PATIENT X-RAY HAS A DIAGNOSABLE CONDITION?
• One-Tailed T-Test: Is the mean grayscale value of an X-Ray with a condition present greater than the mean grayscale value of an X-Ray with no condition?
• Image Classification (Deep Learning CNN): Identify whether an X-Ray has a diagnosable condition with reasonable accuracy.
• Research on identifying specific types of cancer
3. DATASET FROM THE NATIONAL INSTITUTES OF HEALTH
• 108,948 X-Ray images from 32,717 unique patients, all from one hospital.
• 14 disease categories of image findings, plus normal ("No Finding").
• Made publicly available in September 2017 for research purposes.
4. NEURAL NETWORKS
• Assign weights to input values and pass them through a 'black box' of interconnected nodes to determine which nodes should receive the greatest weights for a given output set.
• Several types of Neural Networks exist for different applications.
• Attributes of Convolutional Neural Networks:
  • Difficult to train
  • Very good at image classification
5. GOOGLE COLABORATORY
• Google's online notebook environment (much like Jupyter)
• Provides hardware-accelerated processing for free
• Vastly reduces processing time for millions of calculations
• Python-based
6. DATA PRE-PROCESSING

Process                      Reason
Width x Height matching      Consistency (aspect ratio is off)
AP Labeling                  AP is preferred over PA
Unique Patients              To avoid bias
Latest patient images        To avoid bias
Unusual and inaccurate       Data reduction (e.g., you guessed it
X-Rays                       right, a PELVIS!)
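The patient-level steps above (unique patients, latest image per patient, AP labeling) can be sketched with pandas. The column names below are assumptions for illustration, not the actual NIH metadata schema.

```python
import pandas as pd

# Hypothetical slice of the image-index metadata (column names assumed).
df = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3],
    "follow_up":  [0, 1, 0, 2, 0],   # higher number = later visit
    "view":       ["PA", "AP", "AP", "AP", "AP"],
    "image_file": ["a.png", "b.png", "c.png", "d.png", "e.png"],
})

# Keep AP views only, then keep the latest image per unique patient.
ap = df[df["view"] == "AP"]
latest = ap.sort_values("follow_up").groupby("patient_id").tail(1)
```

Each patient then contributes exactly one (most recent, AP) image, avoiding the bias of over-representing heavily imaged patients.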
7. PRE-PROCESSING: T-TEST
• Need to prepare data for the T-Test:
  • Transform images to mean grayscale values
    • Per image and per cancer type
  • Separate into cancer types
  • Check outliers, skewness, kurtosis
    • Use histograms, boxplots and the IQR
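The two numeric steps above, one mean grayscale value per image plus an IQR outlier check, can be sketched in NumPy. The random arrays below stand in for real X-ray pixels.

```python
import numpy as np

# Synthetic stand-ins for grayscale X-ray images (values 0-255).
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(8, 8)) for _ in range(20)]

# One mean grayscale value per image: the t-test's unit of analysis.
means = np.array([img.mean() for img in images])

# IQR fences flag unusually bright or dark images as outliers.
q1, q3 = np.percentile(means, [25, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = means[(means < lo) | (means > hi)]
```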
10. PRE-PROCESSING: NEURAL NETWORK
• Convert all images from .png to .jpg
• Photoshop image filters to emphasize disease characteristics
  • Crop, Median filter, Smart Sharpen (edges)
  • Exported images in only 2 colors (black & white)
• Separated images into Training (70%) and Testing (30%)
  • Within each, divided into Findings & No Findings
  • For compatibility with the neural network code
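The 70/30 split per class described above can be sketched in plain Python; the file names here are made up for illustration.

```python
import random

# Hypothetical file lists for the two classes.
findings    = [f"f_{i}.jpg" for i in range(100)]
no_findings = [f"nf_{i}.jpg" for i in range(100)]

def split(files, train_frac=0.7, seed=42):
    """Shuffle a copy, then cut at the 70% mark: (training, testing)."""
    files = files[:]
    random.Random(seed).shuffle(files)
    cut = int(len(files) * train_frac)
    return files[:cut], files[cut:]

# Splitting each class separately keeps the Findings / No Findings
# balance identical in the training and testing sets.
train_f, test_f   = split(findings)
train_nf, test_nf = split(no_findings)
```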
11. T-TEST ON THE BASIS OF VISUAL PERCEPTION
[Grayscale X-Ray images: normal lung vs. diseased lung]
12. F-TEST TO PERFORM THE CORRECT T-TEST
• Comparison of variances
• If the F-Test value is close to the F critical value (the variances are not significantly different), we choose the two-sample T-Test assuming equal variances.
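The equal-variance check can be sketched with scipy. The two samples below are synthetic grayscale means, not the project's real data.

```python
import numpy as np
from scipy import stats

# Synthetic mean-grayscale samples for the two groups.
rng = np.random.default_rng(1)
no_finding = rng.normal(120.0, 10.0, size=50)
finding    = rng.normal(128.0, 10.0, size=50)

# F statistic: ratio of sample variances (larger over smaller).
v1 = no_finding.var(ddof=1)
v2 = finding.var(ddof=1)
f_stat = max(v1, v2) / min(v1, v2)

# Upper critical value at alpha = 0.05 with (n-1, n-1) degrees of
# freedom; an F statistic below it means the variances are not
# significantly different, so the equal-variance t-test is appropriate.
f_crit = stats.f.ppf(0.95, 49, 49)
equal_var = f_stat < f_crit
```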
13. T-TEST HYPOTHESIS:
• Null Hypothesis based on descriptive statistics
  H0: μgNF >= μgF
• Alternate Hypothesis based on visual perception
  HA: μgNF < μgF
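The one-tailed test of HA: μgNF < μgF can be sketched with scipy; the samples below are synthetic stand-ins for the per-image grayscale means.

```python
import numpy as np
from scipy import stats

# Synthetic mean-grayscale values: gNF = no finding, gF = finding.
rng = np.random.default_rng(2)
gNF = rng.normal(120.0, 10.0, size=50)
gF  = rng.normal(130.0, 10.0, size=50)

# Equal-variance two-sample t-test (as selected by the F-test);
# alternative="less" tests HA: mean(gNF) < mean(gF).
t_stat, p_value = stats.ttest_ind(gNF, gF, equal_var=True,
                                  alternative="less")

# Reject H0 at the 1% significance level if p < 0.01.
reject_null = p_value < 0.01
```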
14. T-TEST RESULTS
• Reject the Null Hypothesis at the 1% significance level.
• The mean grayscale value of a diseased lung X-Ray is greater than that of a normal lung X-Ray.
• Although the T-Test is a statistical test distinct from image-classification techniques, it can be considered a basis for visual perception.
16. STEP 1: BUILDING A CONVOLUTIONAL LAYER
• Number of Convolutional Layers: 5
• Conv2D function:
  • Number of input filters varies with the convolutional layer
  • Kernel size
  • Activation function = 'ReLU' (Rectified Linear Unit)
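What a Conv2D layer computes can be illustrated in NumPy: a single filter slid over a grayscale image ("valid" padding), followed by ReLU. This is a minimal sketch of the operation, not the Keras implementation.

```python
import numpy as np

def conv2d_relu(image, kernel):
    """Slide one kernel over the image (valid padding), then apply ReLU."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0.0)   # ReLU: negative activations -> 0

# A 5x5 linear-ramp "image" and a Laplacian edge-detection kernel.
image  = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0., 1., 0.],
                   [1., -4., 1.],
                   [0., 1., 0.]])
feature_map = conv2d_relu(image, kernel)
```

A Conv2D layer applies many such kernels in parallel (the "number of filters"), each learning to respond to a different local pattern.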
17. STEP 2: POOLING
• The pooling layer aggregates information within a small region of the input features and then down-samples the result
• Max Pooling, pool size
• Strides
• Padding
• Dropout layer
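The max-pooling operation above can be sketched in NumPy: keep the largest activation in each non-overlapping 2x2 window, halving each spatial dimension.

```python
import numpy as np

def max_pool(x, pool=2, stride=2):
    """Max pooling: the maximum of each pool-sized window, stepped by stride."""
    oh = (x.shape[0] - pool) // stride + 1
    ow = (x.shape[1] - pool) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i*stride:i*stride+pool,
                          j*stride:j*stride+pool].max()
    return out

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [9., 1., 2., 3.],
              [5., 6., 0., 4.]])
pooled = max_pool(x)   # 4x4 -> 2x2
```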
19. STEP 4: FULLY CONNECTED LAYER & OUTPUT LAYER
• Purpose: to create a fully connected layer
• The fully connected layer is a hidden layer
• Number of nodes in the hidden layer
• Activation function for the hidden layer: ReLU
• Activation function for the output layer: Sigmoid
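The hidden layer with ReLU plus a single sigmoid output node can be sketched in NumPy. The layer sizes and random weights below are illustrative assumptions, not the model's actual parameters.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
features = rng.normal(size=16)        # flattened pooled feature map

W1 = rng.normal(size=(8, 16)) * 0.1   # hidden layer: 8 nodes (assumed)
b1 = np.zeros(8)
W2 = rng.normal(size=(1, 8)) * 0.1    # output layer: 1 node
b2 = np.zeros(1)

hidden = relu(W1 @ features + b1)
prob   = sigmoid(W2 @ hidden + b2)[0]  # P(finding), always in (0, 1)
```

The sigmoid output makes the network's final value interpretable as the probability that the X-Ray shows a finding, which suits the binary Findings / No Findings task.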
23. RESULTS OF CONVOLUTIONAL NETWORK MODEL

Model Name                                Validation Loss   Validation Accuracy
Convolutional Network Model (Version 55)  0.66              63.13%
24. PRE-TRAINED MODELS
• Different types of available pre-trained models:
  • ResNet50 Model
  • MobileNet Model
• Comparison of different models:

Model Name                                Validation Loss   Validation Accuracy
Convolutional Network Model (Version 55)  0.66              63.13%
ResNet50 Model                            7.85              51.32%
MobileNet Model                           6.82              47.50%
26. CONCLUSION: MODEL EVALUATION
• Validation loss: as close to 0 as possible
• Validation accuracy: expressed as a percentage
• The CNN model outperformed the pre-trained models ResNet50 & MobileNet
• Best image-classification accuracy until 2012 was ~73.8%
27. FUTURE RESEARCH
• With continued tuning of hyperparameters and image pre-processing, radiological tools can be developed to identify and flag abnormal X-Rays for medical attention.
• For future research, we want to train the model to classify by cancer type.
• Use heatmaps to identify neuron activation for different lesion types.
Speaker notes:
• The ~73.8% accuracy figure was for a different dataset.
• Model tuning is a never-ending process.
• Metrics: decreased the validation loss from around 8 to close to 0.66, and increased the validation accuracy from approximately 51% to 64% by changing the hyperparameters.