ARTIFICIAL NEURAL NETWORK
CONTENTS
• BIOLOGICAL NEURON MODEL
• ARTIFICIAL NEURAL NETWORK
• TYPES OF ANN
• LEARNING
• APPLICATIONS
• ADVANTAGES
• DISADVANTAGES
• CONCLUSION
BIOLOGICAL NEURON MODEL
• A neuron carries electrical impulses. Neurons are the basic units of the nervous system, whose most important part is the brain.
• Dendrite: receives signals from other neurons.
• Soma (cell body): sums all the incoming signals to generate the input.
• Axon: when the sum reaches a threshold value, the neuron fires and the signal travels down the axon to other neurons.
• Synapses: the points of interconnection of one neuron with other neurons. The amount of signal transmitted depends upon the strength (synaptic weight) of the connections.
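This biological picture maps almost directly onto the artificial neuron used later in the deck: synaptic strengths become numeric weights, the soma becomes a weighted sum, and the axon's firing becomes a threshold test. The short Python sketch below illustrates that mapping; the specific weights, threshold, and inputs are made-up values for illustration only.

# Minimal threshold-neuron sketch (illustrative values, not from the slides).
def fire(inputs, weights, threshold):
    # Soma: sum the weighted incoming signals (weights play the role of synapses).
    total = sum(x * w for x, w in zip(inputs, weights))
    # Axon: the neuron "fires" (outputs 1) only if the sum reaches the threshold.
    return 1 if total >= threshold else 0

# Two active inputs with strong synaptic weights push the sum past the threshold.
print(fire(inputs=[1, 0, 1], weights=[0.6, 0.4, 0.5], threshold=1.0))  # prints 1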
ARTIFICIAL NEURAL NETWORKS
• An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. Information that flows through the network affects the structure of the ANN, because a neural network changes - or learns, in a sense - based on that input and output.
• ANNs are considered nonlinear statistical data modeling tools in which the complex relationships between inputs and outputs are modeled or patterns are found.
• An ANN is also known simply as a neural network.
ANN ARCHITECTURE
• INPUT LAYER: contains the units that receive input from the outside world, i.e. the data the network will learn to recognize and process.
• OUTPUT LAYER: contains the units that produce the network's response to the task it has learned.
• HIDDEN LAYER: these units sit between the input and output layers. The job of the hidden layer is to transform the input into something the output units can use.
• In most neural networks, each hidden neuron is fully connected to every neuron in the previous (input) layer and to every neuron in the next (output) layer (a minimal forward-pass sketch follows below).
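As referenced above, here is a minimal NumPy sketch of a single forward pass through such a fully connected network. The layer sizes (3 inputs, 4 hidden units, 1 output), the random weights, and the sigmoid activation are illustrative assumptions rather than anything specified in the slides, and bias terms are omitted for brevity.

import numpy as np

def sigmoid(z):
    # Element-wise squashing activation.
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Fully connected weights: every hidden unit sees every input unit,
# and the output unit sees every hidden unit.
W_hidden = rng.normal(size=(4, 3))   # hidden layer: 4 units, 3 inputs each
W_output = rng.normal(size=(1, 4))   # output layer: 1 unit, 4 hidden inputs

x = np.array([0.5, -1.2, 3.0])       # made-up input from the "outside world"

h = sigmoid(W_hidden @ x)            # hidden layer transforms the input
y = sigmoid(W_output @ h)            # output layer produces the response
print(y)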
Types of Artificial Neural Networks
• Multilayer perceptron (MLP)
• Convolutional neural network (CNN)
• Recursive neural network (RNN)
• Long short-term memory (LSTM)
• Recurrent neural network (RNN)
• Sequence-to-sequence models
• Shallow neural networks
• Feedforward neural network
LEARNING
• Supervised Learning: the training data is fed to the network and the desired output is known; the weights are adjusted until the output yields the desired values (a minimal training sketch follows below).
• Unsupervised Learning: the network is trained on input data whose desired output is not known. The network classifies the input data and adjusts the weights by extracting features from the input.
• Reinforcement Learning: the correct output value is unknown, but feedback is provided on whether the network's output is right or wrong; it sits between supervised and unsupervised learning.
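As a concrete illustration of the supervised case, the sketch below trains a single threshold neuron with the classic perceptron weight-update rule until its outputs match the known desired outputs. The training set (a logical-AND truth table with a bias input) and the learning rate are assumptions chosen for illustration.

import numpy as np

# Made-up training data: inputs (first column is a constant bias input)
# and the known desired outputs (logical AND of the last two columns).
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
t = np.array([0, 0, 0, 1])

w = np.zeros(3)   # start with zero weights
lr = 0.1          # learning rate

for _ in range(100):                      # repeated passes over the training data
    for x, target in zip(X, t):
        y = 1 if w @ x > 0 else 0         # current output of the neuron
        w += lr * (target - y) * x        # adjust weights toward the desired output

print([1 if w @ x > 0 else 0 for x in X])  # now matches the desired outputs [0, 0, 0, 1]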
APPLICATIONS
• Human face recognition
• Ridesharing apps like Uber and Lyft
• Handwriting recognition
• Stock exchange prediction
ADVANTAGES
• Information is stored across the entire network
• Ability to work with incomplete knowledge
• Fault tolerance
• Parallel processing capability
• Distributed memory
DISADVANTAGES
• Hardware dependence
• Unexplained behavior of the network
• The duration of training needed is unknown in advance
• Difficulty of presenting the problem to the network: it must first be translated into numerical values
CONCLUSION
• Artificial neural networks are inspired by the learning processes that take place in biological systems.
• Biological neural learning happens through the modification of synaptic strength. Artificial neural networks learn in the same way.
• The synapse-strength modification rules for artificial neural networks can be derived by applying mathematical optimisation methods.
• Learning tasks of artificial neural networks can be reformulated as function approximation tasks.
• Neural networks can be considered nonlinear function-approximation tools (i.e., linear combinations of nonlinear basis functions) whose parameters are found by applying optimisation methods.
• The optimisation is done with respect to the approximation error measure.
• In general, a single-hidden-layer neural network (MLP, RBF or other) is enough to learn the approximation of a nonlinear function. In such cases, general optimisation methods can be applied to derive the update rules for the synaptic weights.
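To make the last two points concrete, the sketch below treats a single-hidden-layer network as a linear combination of nonlinear (Gaussian/RBF) basis functions and finds its output weights by minimising the squared approximation error. The target function, the basis centres and widths, and the use of a direct least-squares solver as the optimiser are all illustrative choices, not something prescribed by the slides.

import numpy as np

# Nonlinear target function to approximate (illustrative choice).
x = np.linspace(-3, 3, 200)
t = np.sin(x)

# Hidden layer: fixed Gaussian (RBF) basis functions; the network output
# is a linear combination of these nonlinear basis functions.
centres = np.linspace(-3, 3, 10)
phi = np.exp(-(x[:, None] - centres[None, :]) ** 2)   # design matrix, shape (200, 10)

# Optimise the output weights with respect to the squared approximation error
# (solved here in closed form; iterative optimisers would work as well).
w, *_ = np.linalg.lstsq(phi, t, rcond=None)

approx = phi @ w
print(np.mean((approx - t) ** 2))   # mean squared approximation error (should be close to zero)

Swapping the closed-form solver for gradient descent on the same squared-error measure recovers the usual picture of learning as iterative adjustment of the synaptic weights.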