2. INTRODUCTION
• TensorFlow is an open-source machine learning library for research
and production.
• A tensor generalizes scalars, vectors, and matrices to any number of dimensions.
• In TensorFlow, data is represented as n-dimensional tensors.
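As a rough sketch (using NumPy arrays here to stand in for TensorFlow tensors), the rank of a tensor is simply its number of dimensions:

```python
import numpy as np

scalar = np.array(5.0)            # rank-0 tensor (0 dimensions)
vector = np.array([1.0, 2.0])     # rank-1 tensor (1 dimension)
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])   # rank-2 tensor (2 dimensions)

print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2
```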
3. Neural Networks
• Neural networks are inspired by biological neurons.
• A neural network consists of the following components:
• Neurons
• Connection Links
• Weights
• Activation Functions
• Convolutional Neural Network = Convolution + Neural Network
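The components listed above can be sketched as a single neuron: a weighted sum of the inputs plus a bias, passed through an activation function (the weights and inputs here are illustrative values, not from the source):

```python
import numpy as np

def neuron(x, w, b):
    # weighted sum of inputs over the connection links, plus bias,
    # passed through an activation function (ReLU here)
    z = np.dot(w, x) + b
    return max(0.0, z)

out = neuron(np.array([1.0, 2.0]), np.array([0.5, -0.25]), 0.1)
print(out)  # 0.1
```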
5. Convolution Layer
• Convolution is the first layer, used to extract features from an input image.
• Convolution preserves the relationship between pixels by learning image
features from small squares of input data.
• Convolving an image with different filters can perform operations such
as edge detection, blurring, and sharpening.
• It is a mathematical operation that takes two inputs: an image matrix
and a filter (also called a kernel).
6. • For example, if a 50x50 input is convolved with a 3x3 filter at
stride 2 using zero-padding ('same' padding), the output, called a
'feature map', has size 25x25 (without padding it would be 24x24).
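The sliding-window computation can be sketched in NumPy (`conv2d` is a hypothetical helper written for illustration, not a library function); with no padding, the output size follows (n - k) / s + 1:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    # 'valid' (no padding) convolution: slide the kernel over the image
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)  # elementwise multiply, then sum
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.ones((3, 3)) / 9.0          # a simple averaging (blur) filter
fmap = conv2d(image, kernel, stride=2)
print(fmap.shape)  # (2, 2): (6 - 3) // 2 + 1 = 2 in each dimension
```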
7. Strides
• Stride is the number of pixels the filter shifts over the input matrix at each step.
• When the stride is 1, we move the filter 1 pixel at a time;
when the stride is 2, we move the filter 2 pixels at a time, and
so on.
8. Padding
• Sometimes the filter does not fit the input image perfectly. In that case,
pad the image with zeros (zero-padding) so that it fits.
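Zero-padding can be sketched with NumPy's `np.pad`; a 1-pixel border of zeros lets a 3x3 filter at stride 1 produce an output the same size as the input:

```python
import numpy as np

image = np.ones((5, 5))                # 5x5 input
padded = np.pad(image, pad_width=1,    # add a 1-pixel border of zeros
                mode='constant', constant_values=0)
print(padded.shape)  # (7, 7)
```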
9. Rectified Linear Unit (RELU)
• The main purpose of ReLU is to introduce non-linearity.
• ReLU maps an input x to max(0, x): negative inputs are mapped
to 0, and positive inputs are passed through unchanged.
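The ReLU mapping can be written in one line (a NumPy sketch):

```python
import numpy as np

def relu(x):
    # negative inputs become 0; positive inputs pass through unchanged
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # negatives become 0
```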
10. Pooling Layer
• The pooling layer reduces the number of parameters when the images are
too large.
• Different types of pooling:
• Max Pooling
• Average Pooling
• Sum Pooling
• Max pooling is the most widely used, as it takes the largest element from
each region of the rectified feature map.
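Max pooling with a 2x2 window can be sketched in NumPy (`max_pool` is an illustrative helper, assuming the input dimensions divide evenly by the window size):

```python
import numpy as np

def max_pool(feature_map, size=2):
    # take the largest element in each non-overlapping size x size window
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fm = np.array([[1, 3, 2, 4],
               [5, 6, 1, 2],
               [7, 2, 9, 1],
               [3, 4, 0, 8]])
print(max_pool(fm))  # each 2x2 block is reduced to its maximum
```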
11. Fully Connected:
• In the fully connected layer, we flatten our matrix into a vector
and feed it into a fully connected layer, as in an ordinary neural network.
• With the fully connected layers, we combine these features together
to create a model.
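Flattening plus a fully connected layer amounts to a reshape followed by a matrix multiply (the shapes and random values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
pooled = rng.standard_normal((4, 4))     # a pooled feature map
x = pooled.flatten()                     # flattened into a 16-element vector
W = rng.standard_normal((10, x.size))    # weights for 10 output classes
b = np.zeros(10)                         # biases
logits = W @ x + b                       # fully connected layer output
print(logits.shape)  # (10,)
```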
12. Softmax
• Finally, we have an activation function such as softmax or sigmoid to
classify the outputs.
• Softmax converts the outputs to probabilities by exponentiating each
output and dividing by the sum of all the exponentials.
• This forces the outputs to sum to 1.
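A numerically stable softmax sketch in NumPy (subtracting the maximum is a common stability trick and does not change the result):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # shift for numerical stability
    return e / e.sum()                   # divide by the sum of exponentials

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p)  # probabilities that sum to 1
```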
13. Cross-Entropy
• Cross-entropy is a loss function whose error has to be minimized.
• It measures the distance between the output of softmax and the one-hot
encoding of the label.
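For a one-hot label, cross-entropy reduces to the negative log of the probability assigned to the true class (a NumPy sketch; the small epsilon guards against log(0)):

```python
import numpy as np

def cross_entropy(probs, one_hot):
    # distance between the predicted distribution and the one-hot label
    return -np.sum(one_hot * np.log(probs + 1e-12))

probs = np.array([0.7, 0.2, 0.1])   # softmax output
label = np.array([1.0, 0.0, 0.0])   # one-hot encoding of the true class
print(cross_entropy(probs, label))  # -log(0.7), about 0.357
```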
14. Dropout
• Dropout is an effective way of regularizing neural networks to avoid
overfitting.
• The dropout rate is commonly set between about 0.3 and 0.5.
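A sketch of "inverted" dropout, a common training-time formulation (the rescaling detail is an assumption, not from the slides): randomly zero a fraction of the activations and rescale the survivors so their expected value is unchanged:

```python
import numpy as np

def dropout(x, rate, rng):
    # zero out a fraction `rate` of units; rescale the rest by 1/(1-rate)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(42)
x = np.ones(1000)
y = dropout(x, rate=0.5, rng=rng)
print((y == 0).mean())  # roughly half of the units are dropped
```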
15. Result
• The performance was evaluated using the accuracy of the model on
actual vs. predicted labels.
• How to find a good number of hidden layers for a model?
• If the image size is n x n
• and the stride is s,
• let a = n/s;
• then the number of hidden layers L should be greater than a.