DigitRecognition.pptx
1. BACKGROUND • Nowadays, advances have been made to recognize
symbols that were once meaningful and understandable only to
humans. It is very easy for humans to understand images, but the
same image is very difficult for a computer to comprehend.
Driverless cars now read symbols that were once difficult for
computers to understand.
• Experimental results on the benchmark MNIST database of
handwritten digit images show that the performance of our
algorithm is remarkable and demonstrate its superiority over
several existing algorithms.
2. OBJECTIVE • In the current age of digitization, handwriting
recognition plays an important role in information processing.
• The main objective of the project is to solve the problem where
the computer needs to recognize digits in real time.
• With the power of parallel computing, the intention of the
project is to solve the real-world problem of recognizing digits
across 28,000 images, which may appear anywhere in our day-to-day
life.
5. COMMAND FOR ENTERING RESERVATION:
• srun -p reservation --reservation=csye7105-gpu --gres=gpu:4 --mem=2Gb --export=ALL --pty /bin/bash
• Command for getting GPU information: nvidia-smi
• Command for getting CPU information: lscpu
6. DATASET SPECIFICATIONS
• Each image is 28 pixels in height and 28 pixels in width, for a
total of 784 pixels. Each pixel has a single pixel value
associated with it, indicating the lightness or darkness of that
pixel, with higher numbers meaning darker. This pixel value is an
integer between 0 and 255, inclusive.
• The training data set (train.csv) has 785 columns. The first
column, called "label", is the digit that was drawn by the user.
The rest of the columns contain the pixel values of the associated
image, as sketched below.
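A minimal loading sketch, assuming train.csv sits in the working
directory and pandas/NumPy are available (the file name and column
layout come from the slide above):

```python
import numpy as np
import pandas as pd

# Column 0 is the digit label; columns 1..784 are pixel values in [0, 255].
train = pd.read_csv("train.csv")
labels = train["label"].to_numpy()
pixels = train.drop(columns=["label"]).to_numpy(dtype=np.uint8)

# Reshape each 784-value row into a 28x28 image.
images = pixels.reshape(-1, 28, 28)
print(images.shape)  # (num_samples, 28, 28)
```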
7. DATA INFORMATION
• Data size: 167 MB
• Number of columns in training set: 785
• Each image size: 28 pixels (height) × 28 pixels (width) = 784 pixels total
10. WORKING WITH GPU
• PyTorch has a package that supports CUDA tensor types, which
implement the same functions as CPU tensors but utilize GPUs for
computation (see the sketch after this list).
• PyTorch is an optimized tensor library for deep learning using
GPUs and CPUs.
• The entire project was supported by the Discovery cluster
offered by Northeastern.
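A minimal device-selection sketch, assuming a CUDA-capable node
such as the one requested with srun above (the tensor shapes are
illustrative):

```python
import torch

# Use a GPU when one is visible, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# CUDA tensors expose the same operations as CPU tensors.
x = torch.randn(64, 784, device=device)
y = torch.relu(x)  # computed on the GPU when one is available
print(device, torch.cuda.device_count())
```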
12. GPU AND PYTORCH
Data parallelism is parallelization across multiple processors in
parallel computing environments. It focuses on distributing the
data across different nodes, which operate on the data in
parallel.
torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)
Implements data parallelism at the module level, as sketched
below.
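A minimal usage sketch for nn.DataParallel; the model here is a
placeholder for illustration, not the project's architecture:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))

# Replicate the module on every visible GPU; each input batch is
# split along dim 0 (the batch dimension) and scattered across the replicas.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
```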
14. METHODOLOGY • PyTorch provides a module, nn, that makes
building networks much simpler. Here I have built a network with
784 inputs; hidden layers of 512, 256, 128, and 64 neurons; 10
output units, as we have 10 classes to classify; and a softmax
output for multi-class classification (see the sketch below).
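A minimal sketch of the layer sizes described above; the ReLU
activations between layers are an assumption, since the slide only
specifies the layer widths and the softmax output:

```python
import torch.nn as nn

# Fully connected classifier: 784 -> 512 -> 256 -> 128 -> 64 -> 10.
model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
    nn.LogSoftmax(dim=1),  # log-softmax pairs with nn.NLLLoss
)
```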
15. PYTORCH
PyTorch is a Python package that provides two high-level features
(illustrated in the sketch below):
• Tensor computation (like NumPy) with strong GPU acceleration
• Deep neural networks built on a tape-based autograd system
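A minimal illustration of the tape-based autograd system, using
nothing beyond core torch:

```python
import torch

# Operations on tensors with requires_grad=True are recorded on a
# tape; backward() replays the tape in reverse to compute gradients.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()
print(x.grad)  # tensor([4., 6.]), i.e., d/dx sum(x^2) = 2x
```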
19. CNN IMPLEMENTATION
• Activation function: the function that input information is
passed through in a neuron. We used the Rectified Linear Unit
(ReLU) as the activation function; it is zero for negative x
values and a straight line for positive x values. ReLU is used
more frequently than sigmoid and tanh because it is more
computationally efficient.
• I have used torch.nn.Conv2d, which applies a 2D convolution over
an input signal composed of several input planes (see the sketch
below).
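A minimal convolution-plus-ReLU sketch; the channel counts and
kernel size here are illustrative assumptions, not the project's
exact configuration:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1)
relu = nn.ReLU()

x = torch.randn(64, 1, 28, 28)  # a batch of 28x28 grayscale digits
out = relu(conv(x))             # shape: (64, 32, 28, 28)
```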
21. CNN
IMPLEMENTATION
• nn.Dropout2d, during training, randomly zeroes some of the
elements of the input tensor with probability p using samples from
a Bernoulli distribution. It randomly zeroes out entire channels
(a channel is a 2D feature map; e.g., the j-th channel of the i-th
sample in the batched input is the 2D tensor input[i, j]), as
sketched below.
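A minimal Dropout2d sketch; the probability p and the feature-map
shape are illustrative assumptions:

```python
import torch
import torch.nn as nn

drop = nn.Dropout2d(p=0.25)
drop.train()                     # channel dropout is active only in training mode

x = torch.randn(64, 32, 14, 14)  # a batch of 32-channel feature maps
out = drop(x)                    # whole channels are zeroed per sample
```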
22. CNN IMPLEMENTATION
• The output of the previous layer acts as the input to the next
layer, and the output size is calculated with the formula below:
• Output = (Input − Kernel_size + 2 × Padding) / Stride + 1
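As a worked check of the formula (the values here are a
hypothetical example, not the project's layers):

```python
def conv_output_size(input_size: int, kernel_size: int, padding: int, stride: int) -> int:
    # Output = (Input - Kernel_size + 2 * Padding) / Stride + 1
    return (input_size - kernel_size + 2 * padding) // stride + 1

# A 28x28 input with a 3x3 kernel, padding 1, and stride 1 stays 28x28.
print(conv_output_size(28, 3, 1, 1))  # 28
```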
23. REASON FOR USING CUSTOM ARCHITECTURE
• Firstly, because the input is 784-pixel rows of data to be
trained, while state-of-the-art models require 227×227 or 224×224
dimensional inputs.
• The pretrained models have very deep architectures, which are
not required for the current dataset.
• They may lead to overfitting and may cause vanishing gradient
problems.
28. HYPOTHESIS
• We assume that using more GPU power always reduces the
computational time of any model.
• A larger batch size means better predicted model output.
• Keeping the number of GPUs constant, there exists a linear
relationship between the number of batches and the time taken.
31. RESULT ANALYSIS
• Using larger batch sizes does not necessarily improve the model
accuracy.
• With a larger batch size there is a chance that generalization
capability is lost.
• So the hypothesis that a greater batch size results in better
model prediction does not hold here.
32. CONCLUSION • If GPU memory usage is not optimal, running the
model on more GPUs results in a decrease in overall performance.
• There should be optimum usage of GPUs and batch size for better
performance. Ideally GPU utilization should be more than 90%;
otherwise there is no advantage to using more GPUs on less data.
Data parallelism will show poor results if a large number of
under-utilized GPUs is used.
• Keeping the number of GPUs constant, there exists a linear
relationship between the number of batches and the time taken.
33. FUTURE IMPROVEMENTS SCOPE
The recognition of digits, a subfield of character recognition,
has been the subject of much attention since the first years of
research in the field of handwriting recognition. Improved
performance has been observed with:
• Feature selection
• Multiple classifiers
• Synthetic data
• Creation of a database with touching digits
• Segmentation based on an intelligent process in order to reduce
the segmentation path candidates
• Post-processing techniques