CONVOLUTION OPERATION
A convolution operation acts on all the pixel
values within its kernel's receptive field,
producing a single output value by multiplying
the kernel weights with the pixel values
elementwise, summing the products, and
adding a bias term to the sum. Without
padding, this also reduces the spatial
dimensions of the input matrix.
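
As an illustration, here is a minimal NumPy sketch of the operation described above (the conv2d helper and the example edge filter are hypothetical, not from the slides); it slides the kernel over the image, multiplies elementwise, sums, adds a bias, and returns a smaller output:

import numpy as np

def conv2d(image, kernel, bias=0.0):
    # Valid (no-padding) convolution: for each kernel position,
    # multiply the receptive field elementwise, sum, and add the bias.
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh, ow = ih - kh + 1, iw - kw + 1   # output is smaller than the input
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]          # receptive field
            out[i, j] = np.sum(patch * kernel) + bias  # one output value
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])   # a simple vertical-edge filter
print(conv2d(image, kernel).shape)   # (3, 3): the 5x5 input has shrunk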
PARAMETER SHARING
• Convolutional Neural Networks use a pair of
techniques known as parameter sharing and
parameter tying. Parameter sharing means
that all neurons in a particular feature map
share the same weights. This reduces the
number of parameters in the whole system,
making it computationally cheap, as the
sketch after this bullet shows in numbers.
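
The comparison below contrasts one shared 3x3 kernel with a layer where every output neuron had its own private weights (the 32x32 input size is an assumption chosen for illustration):

# One shared 3x3 kernel (plus bias) serves every neuron in the feature map.
h, w, k = 32, 32, 3
conv_params = k * k + 1                        # 10 parameters

# Without sharing, each of the 30x30 output neurons would need its own
# private 3x3 weight patch and bias.
out_h, out_w = h - k + 1, w - k + 1
unshared_params = out_h * out_w * (k * k + 1)  # 9,000 parameters

print(conv_params, unshared_params)            # 10 vs 9000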
EQUIVARIANT REPRESENTATION
• CNNs are famously equivariant with respect to
translation: translating the input to a
convolutional layer translates the output by
the same amount.
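
One way to check this property is the following sketch (it uses SciPy's correlate2d with circular boundaries so that the shift commutes exactly; ordinary zero padding only disturbs this at the borders):

import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))

def conv(x):
    # 'wrap' treats the image as circular, so a circular shift of the
    # input produces exactly the same circular shift of the output.
    return correlate2d(x, kernel, mode='same', boundary='wrap')

shift_then_conv = conv(np.roll(image, shift=2, axis=1))
conv_then_shift = np.roll(conv(image), shift=2, axis=1)
print(np.allclose(shift_then_conv, conv_then_shift))  # True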
CONVOLUTION OPERATION PADDING
• Padding is a technique used to preserve the
spatial dimensions of the input after a
convolution operation. It involves adding
extra pixels (typically zeros) around the
border of the input feature map before
convolving.
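
For a 3x3 kernel, padding the border with (k - 1) / 2 = 1 pixel of zeros keeps the output the same size as the input; a minimal NumPy sketch:

import numpy as np

image = np.ones((5, 5))
k = 3
p = (k - 1) // 2   # 'same' padding width for an odd kernel

padded = np.pad(image, pad_width=p, mode='constant', constant_values=0)
print(padded.shape)             # (7, 7)
print(padded.shape[0] - k + 1)  # 5: convolving the padded input
                                # restores the original 5x5 size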
STRIDE
• The number of pixels by which the filter shifts
across the input matrix is known as the stride.
When the stride is 1, we move the filter 1
pixel at a time. Similarly, when the stride is 2,
we move the filter 2 pixels at a time, and
so on.
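
Stride, together with padding, determines the output size via the standard formula floor((n + 2p - k) / s) + 1; a small sketch:

def conv_output_size(n, k, p=0, s=1):
    # n: input size, k: kernel size, p: padding, s: stride.
    return (n + 2 * p - k) // s + 1

print(conv_output_size(7, 3, s=1))  # 5: stride 1 visits every position
print(conv_output_size(7, 3, s=2))  # 3: stride 2 skips every other one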
RELU
• Usually, the relationship between pixel values
in an image is highly non-linear, which is very
difficult for a purely linear model to capture.
The ReLU activation function is applied after
convolution to introduce this non-linearity
into the network, making the job easier.
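
ReLU itself is just an elementwise max(0, x); a minimal sketch of applying it to a feature map:

import numpy as np

def relu(x):
    # Negative activations become 0; positive ones pass through unchanged.
    return np.maximum(0, x)

feature_map = np.array([[-2.0, 1.5],
                        [ 0.3, -0.7]])
print(relu(feature_map))
# [[0.  1.5]
#  [0.3 0. ]]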